Test Report: Docker_Linux_crio_arm64 19763

                    
aa5eddb378ec81f2e43c808f5204b861e96187fd:2024-10-07:36541

Failed tests (4/328)

Order  Failed test                                   Duration (s)
32     TestAddons/serial/GCPAuth/PullSecret          480.82
35     TestAddons/parallel/Ingress                   153.14
37     TestAddons/parallel/MetricsServer             347.92
174    TestMultiControlPlane/serial/RestartCluster   139.74
TestAddons/serial/GCPAuth/PullSecret (480.82s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-504513 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-504513 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [603bf7a0-7f9c-4a72-985b-e5db3c9ca21c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:627: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:627: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-504513 -n addons-504513
addons_test.go:627: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-07 12:09:31.579580706 +0000 UTC m=+733.932821555
addons_test.go:627: (dbg) Run:  kubectl --context addons-504513 describe po busybox -n default
addons_test.go:627: (dbg) kubectl --context addons-504513 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-504513/192.168.58.2
Start Time:       Mon, 07 Oct 2024 12:01:31 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.21
IPs:
IP:  10.244.0.21
Containers:
busybox:
Container ID:  
Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sleep
3600
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-phgdd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-phgdd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/busybox to addons-504513
Normal   Pulling    6m30s (x4 over 8m)      kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning  Failed     6m30s (x4 over 8m)      kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
Warning  Failed     6m30s (x4 over 8m)      kubelet            Error: ErrImagePull
Warning  Failed     6m15s (x6 over 7m59s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    2m57s (x21 over 7m59s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:627: (dbg) Run:  kubectl --context addons-504513 logs busybox -n default
addons_test.go:627: (dbg) Non-zero exit: kubectl --context addons-504513 logs busybox -n default: exit status 1 (112.640404ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:627: kubectl --context addons-504513 logs busybox -n default: exit status 1
addons_test.go:629: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.82s)
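
The pod never left Pending: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc failed with "unable to retrieve auth token: invalid username/password: unauthorized", so the kubelet sat in ImagePullBackOff until the 8m0s wait expired. A minimal follow-up sketch, assuming the addons-504513 cluster is still running and that the gcp-auth addon is expected to inject pull credentials into the default namespace; none of these commands appear in this log, they only inspect what the webhook actually attached:

  # did the gcp-auth webhook attach an image pull secret to the pod?
  kubectl --context addons-504513 get pod busybox -n default -o jsonpath='{.spec.imagePullSecrets}'
  # is a pull secret listed on the default service account, and does the secret exist?
  kubectl --context addons-504513 get serviceaccount default -n default -o yaml
  kubectl --context addons-504513 get secrets -n default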

TestAddons/parallel/Ingress (153.14s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-504513 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-504513 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-504513 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6fff75f6-93f3-4a2a-9897-d6c921462620] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6fff75f6-93f3-4a2a-9897-d6c921462620] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003452899s
I1007 12:11:00.791477 1178462 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-504513 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.081909041s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-504513 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.58.2
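
Exit status 28 is curl's timeout error (CURLE_OPERATION_TIMEDOUT), reported by ssh as "Process exited with status 28": the request to the ingress controller via 127.0.0.1 inside the node never completed in the 2m11s the command ran. A minimal re-check sketch, assuming the cluster and the ingress-nginx addon are still up; the --max-time bound and the log tail size are illustrative choices, not values from this report:

  # retry the probe with verbose output and a short, explicit timeout
  out/minikube-linux-arm64 -p addons-504513 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # check the controller pod/service and its recent logs
  kubectl --context addons-504513 -n ingress-nginx get pods,svc -o wide
  kubectl --context addons-504513 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50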
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-504513
helpers_test.go:235: (dbg) docker inspect addons-504513:

-- stdout --
	[
	    {
	        "Id": "98bc47ee472dc808320d44cc1071573848e28022b6dae187fb3e2cc6aff79901",
	        "Created": "2024-10-07T11:58:04.033530051Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1179822,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T11:58:04.171244449Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/98bc47ee472dc808320d44cc1071573848e28022b6dae187fb3e2cc6aff79901/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/98bc47ee472dc808320d44cc1071573848e28022b6dae187fb3e2cc6aff79901/hostname",
	        "HostsPath": "/var/lib/docker/containers/98bc47ee472dc808320d44cc1071573848e28022b6dae187fb3e2cc6aff79901/hosts",
	        "LogPath": "/var/lib/docker/containers/98bc47ee472dc808320d44cc1071573848e28022b6dae187fb3e2cc6aff79901/98bc47ee472dc808320d44cc1071573848e28022b6dae187fb3e2cc6aff79901-json.log",
	        "Name": "/addons-504513",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-504513:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-504513",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d2e8a2f84ab49a114e991e24dd187b2ac0e96d8fd4ece15acb5092af38d18515-init/diff:/var/lib/docker/overlay2/679cc8fccbb0902884eb141037cc21fc6e7a2efac609a53e07ea6b92675ef1c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2e8a2f84ab49a114e991e24dd187b2ac0e96d8fd4ece15acb5092af38d18515/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2e8a2f84ab49a114e991e24dd187b2ac0e96d8fd4ece15acb5092af38d18515/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2e8a2f84ab49a114e991e24dd187b2ac0e96d8fd4ece15acb5092af38d18515/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-504513",
	                "Source": "/var/lib/docker/volumes/addons-504513/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-504513",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-504513",
	                "name.minikube.sigs.k8s.io": "addons-504513",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2030c59475cbb20250f1152a5ce51d3293664eff342a56f7429e48c868124201",
	            "SandboxKey": "/var/run/docker/netns/2030c59475cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34247"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34248"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34251"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34249"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34250"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-504513": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null,
	                    "NetworkID": "160722c35aa7eda7eed5d217de65189c1b1c5c2374872a33482a67b09fd2b7e1",
	                    "EndpointID": "ffc5a0c61c43377350cf42ab1a3675abf1fdf6ded06c3f8debe26cecdf627b13",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-504513",
	                        "98bc47ee472d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-504513 -n addons-504513
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-504513 logs -n 25: (1.485321213s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	| delete  | -p download-only-513494              | download-only-513494   | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	| start   | -o=json --download-only              | download-only-459102   | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |                     |
	|         | -p download-only-459102              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	| delete  | -p download-only-459102              | download-only-459102   | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	| delete  | -p download-only-513494              | download-only-513494   | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	| delete  | -p download-only-459102              | download-only-459102   | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	| start   | --download-only -p                   | download-docker-790369 | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |                     |
	|         | download-docker-790369               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-790369            | download-docker-790369 | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	| start   | --download-only -p                   | binary-mirror-325982   | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |                     |
	|         | binary-mirror-325982                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33869               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-325982              | binary-mirror-325982   | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	| addons  | disable dashboard -p                 | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |                     |
	|         | addons-504513                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |                     |
	|         | addons-504513                        |                        |         |         |                     |                     |
	| start   | -p addons-504513 --wait=true         | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 12:01 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-504513 addons disable         | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:01 UTC | 07 Oct 24 12:01 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-504513 addons disable         | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:09 UTC | 07 Oct 24 12:09 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:09 UTC | 07 Oct 24 12:09 UTC |
	|         | -p addons-504513                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-504513 addons disable         | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:09 UTC | 07 Oct 24 12:10 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-504513 ip                     | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:09 UTC | 07 Oct 24 12:09 UTC |
	| addons  | addons-504513 addons disable         | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:09 UTC | 07 Oct 24 12:09 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-504513 addons                 | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:10 UTC | 07 Oct 24 12:10 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-504513 addons                 | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:10 UTC | 07 Oct 24 12:10 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-504513 addons                 | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:10 UTC | 07 Oct 24 12:10 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ssh     | addons-504513 ssh curl -s            | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-504513 ip                     | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:13 UTC | 07 Oct 24 12:13 UTC |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:57:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:57:57.259836 1179332 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:57:57.260029 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:57:57.260055 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 11:57:57.260075 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:57:57.260505 1179332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
	I1007 11:57:57.261102 1179332 out.go:352] Setting JSON to false
	I1007 11:57:57.262044 1179332 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27622,"bootTime":1728274656,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 11:57:57.262163 1179332 start.go:139] virtualization:  
	I1007 11:57:57.264826 1179332 out.go:177] * [addons-504513] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 11:57:57.266875 1179332 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 11:57:57.266933 1179332 notify.go:220] Checking for updates...
	I1007 11:57:57.269994 1179332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:57:57.271490 1179332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 11:57:57.273049 1179332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	I1007 11:57:57.274556 1179332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 11:57:57.275949 1179332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:57:57.277874 1179332 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:57:57.297351 1179332 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 11:57:57.297481 1179332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:57:57.351160 1179332 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-07 11:57:57.341916389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:57:57.351283 1179332 docker.go:318] overlay module found
	I1007 11:57:57.353198 1179332 out.go:177] * Using the docker driver based on user configuration
	I1007 11:57:57.354894 1179332 start.go:297] selected driver: docker
	I1007 11:57:57.354913 1179332 start.go:901] validating driver "docker" against <nil>
	I1007 11:57:57.354928 1179332 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:57:57.355592 1179332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:57:57.397830 1179332 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-07 11:57:57.388354841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:57:57.398052 1179332 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:57:57.398268 1179332 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:57:57.400517 1179332 out.go:177] * Using Docker driver with root privileges
	I1007 11:57:57.402404 1179332 cni.go:84] Creating CNI manager for ""
	I1007 11:57:57.402465 1179332 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 11:57:57.402479 1179332 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 11:57:57.402560 1179332 start.go:340] cluster config:
	{Name:addons-504513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-504513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:57:57.405228 1179332 out.go:177] * Starting "addons-504513" primary control-plane node in "addons-504513" cluster
	I1007 11:57:57.408474 1179332 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 11:57:57.410778 1179332 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 11:57:57.413157 1179332 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:57:57.413221 1179332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 11:57:57.413225 1179332 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 11:57:57.413232 1179332 cache.go:56] Caching tarball of preloaded images
	I1007 11:57:57.413315 1179332 preload.go:172] Found /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 11:57:57.413325 1179332 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:57:57.413656 1179332 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/config.json ...
	I1007 11:57:57.413683 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/config.json: {Name:mk638eb9b68aa8610ca27e26c5001fd39eddfc00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:57:57.430772 1179332 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 11:57:57.430794 1179332 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 11:57:57.430810 1179332 cache.go:194] Successfully downloaded all kic artifacts
	I1007 11:57:57.430843 1179332 start.go:360] acquireMachinesLock for addons-504513: {Name:mkbbf38566c8131810ffc8f50dd67d6eb8acc9e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:57:57.431323 1179332 start.go:364] duration metric: took 452.017µs to acquireMachinesLock for "addons-504513"
	I1007 11:57:57.431355 1179332 start.go:93] Provisioning new machine with config: &{Name:addons-504513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-504513 Namespace:default APIServerHAVIP: APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:57:57.431430 1179332 start.go:125] createHost starting for "" (driver="docker")
	I1007 11:57:57.433919 1179332 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1007 11:57:57.434146 1179332 start.go:159] libmachine.API.Create for "addons-504513" (driver="docker")
	I1007 11:57:57.434177 1179332 client.go:168] LocalClient.Create starting
	I1007 11:57:57.434274 1179332 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem
	I1007 11:57:57.826989 1179332 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem
	I1007 11:57:58.382582 1179332 cli_runner.go:164] Run: docker network inspect addons-504513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1007 11:57:58.397388 1179332 cli_runner.go:211] docker network inspect addons-504513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1007 11:57:58.397471 1179332 network_create.go:284] running [docker network inspect addons-504513] to gather additional debugging logs...
	I1007 11:57:58.397492 1179332 cli_runner.go:164] Run: docker network inspect addons-504513
	W1007 11:57:58.410610 1179332 cli_runner.go:211] docker network inspect addons-504513 returned with exit code 1
	I1007 11:57:58.410647 1179332 network_create.go:287] error running [docker network inspect addons-504513]: docker network inspect addons-504513: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-504513 not found
	I1007 11:57:58.410661 1179332 network_create.go:289] output of [docker network inspect addons-504513]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-504513 not found
	
	** /stderr **
	I1007 11:57:58.410771 1179332 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 11:57:58.426157 1179332 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa98f111c271 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:cf:52:8b:17} reservation:<nil>}
	I1007 11:57:58.426539 1179332 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ddde00}
	I1007 11:57:58.426567 1179332 network_create.go:124] attempt to create docker network addons-504513 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1007 11:57:58.426623 1179332 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-504513 addons-504513
	I1007 11:57:58.499525 1179332 network_create.go:108] docker network addons-504513 192.168.58.0/24 created
	I1007 11:57:58.499557 1179332 kic.go:121] calculated static IP "192.168.58.2" for the "addons-504513" container
	I1007 11:57:58.499628 1179332 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1007 11:57:58.514873 1179332 cli_runner.go:164] Run: docker volume create addons-504513 --label name.minikube.sigs.k8s.io=addons-504513 --label created_by.minikube.sigs.k8s.io=true
	I1007 11:57:58.531175 1179332 oci.go:103] Successfully created a docker volume addons-504513
	I1007 11:57:58.531259 1179332 cli_runner.go:164] Run: docker run --rm --name addons-504513-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-504513 --entrypoint /usr/bin/test -v addons-504513:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1007 11:57:59.668982 1179332 cli_runner.go:217] Completed: docker run --rm --name addons-504513-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-504513 --entrypoint /usr/bin/test -v addons-504513:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (1.13765622s)
	I1007 11:57:59.669013 1179332 oci.go:107] Successfully prepared a docker volume addons-504513
	I1007 11:57:59.669042 1179332 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:57:59.669062 1179332 kic.go:194] Starting extracting preloaded images to volume ...
	I1007 11:57:59.669137 1179332 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-504513:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1007 11:58:03.962631 1179332 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-504513:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.293440058s)
	I1007 11:58:03.962663 1179332 kic.go:203] duration metric: took 4.293597727s to extract preloaded images to volume ...
	W1007 11:58:03.962816 1179332 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1007 11:58:03.962933 1179332 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1007 11:58:04.018697 1179332 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-504513 --name addons-504513 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-504513 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-504513 --network addons-504513 --ip 192.168.58.2 --volume addons-504513:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1007 11:58:04.331149 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Running}}
	I1007 11:58:04.362239 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:04.384047 1179332 cli_runner.go:164] Run: docker exec addons-504513 stat /var/lib/dpkg/alternatives/iptables
	I1007 11:58:04.473340 1179332 oci.go:144] the created container "addons-504513" has a running status.
	I1007 11:58:04.473426 1179332 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa...
	I1007 11:58:05.637051 1179332 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1007 11:58:05.656494 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:05.673271 1179332 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1007 11:58:05.673294 1179332 kic_runner.go:114] Args: [docker exec --privileged addons-504513 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1007 11:58:05.730703 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:05.747058 1179332 machine.go:93] provisionDockerMachine start ...
	I1007 11:58:05.747154 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:05.763125 1179332 main.go:141] libmachine: Using SSH client type: native
	I1007 11:58:05.763407 1179332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34247 <nil> <nil>}
	I1007 11:58:05.763422 1179332 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 11:58:05.895755 1179332 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-504513
	
	I1007 11:58:05.895782 1179332 ubuntu.go:169] provisioning hostname "addons-504513"
	I1007 11:58:05.895858 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:05.912670 1179332 main.go:141] libmachine: Using SSH client type: native
	I1007 11:58:05.912917 1179332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34247 <nil> <nil>}
	I1007 11:58:05.912934 1179332 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-504513 && echo "addons-504513" | sudo tee /etc/hostname
	I1007 11:58:06.061418 1179332 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-504513
	
	I1007 11:58:06.061582 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:06.078853 1179332 main.go:141] libmachine: Using SSH client type: native
	I1007 11:58:06.079121 1179332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34247 <nil> <nil>}
	I1007 11:58:06.079138 1179332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-504513' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-504513/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-504513' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:58:06.212160 1179332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:58:06.212187 1179332 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19763-1173066/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-1173066/.minikube}
	I1007 11:58:06.212208 1179332 ubuntu.go:177] setting up certificates
	I1007 11:58:06.212218 1179332 provision.go:84] configureAuth start
	I1007 11:58:06.212304 1179332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-504513
	I1007 11:58:06.228721 1179332 provision.go:143] copyHostCerts
	I1007 11:58:06.228808 1179332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem (1078 bytes)
	I1007 11:58:06.228928 1179332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem (1123 bytes)
	I1007 11:58:06.228991 1179332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem (1675 bytes)
	I1007 11:58:06.229045 1179332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem org=jenkins.addons-504513 san=[127.0.0.1 192.168.58.2 addons-504513 localhost minikube]
	I1007 11:58:06.520780 1179332 provision.go:177] copyRemoteCerts
	I1007 11:58:06.520878 1179332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:58:06.520941 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:06.537293 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:06.633227 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 11:58:06.659334 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 11:58:06.683872 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 11:58:06.707099 1179332 provision.go:87] duration metric: took 494.866884ms to configureAuth
	I1007 11:58:06.707126 1179332 ubuntu.go:193] setting minikube options for container-runtime
	I1007 11:58:06.707319 1179332 config.go:182] Loaded profile config "addons-504513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:58:06.707428 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:06.724324 1179332 main.go:141] libmachine: Using SSH client type: native
	I1007 11:58:06.724570 1179332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34247 <nil> <nil>}
	I1007 11:58:06.724596 1179332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:58:06.962634 1179332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:58:06.962662 1179332 machine.go:96] duration metric: took 1.215582058s to provisionDockerMachine
	I1007 11:58:06.962675 1179332 client.go:171] duration metric: took 9.528486941s to LocalClient.Create
	I1007 11:58:06.962688 1179332 start.go:167] duration metric: took 9.528542227s to libmachine.API.Create "addons-504513"
	I1007 11:58:06.962696 1179332 start.go:293] postStartSetup for "addons-504513" (driver="docker")
	I1007 11:58:06.962707 1179332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:58:06.962774 1179332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:58:06.962817 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:06.979075 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:07.077773 1179332 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:58:07.081200 1179332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 11:58:07.081284 1179332 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 11:58:07.081300 1179332 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 11:58:07.081308 1179332 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 11:58:07.081319 1179332 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1173066/.minikube/addons for local assets ...
	I1007 11:58:07.081391 1179332 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1173066/.minikube/files for local assets ...
	I1007 11:58:07.081416 1179332 start.go:296] duration metric: took 118.714604ms for postStartSetup
	I1007 11:58:07.081750 1179332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-504513
	I1007 11:58:07.098388 1179332 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/config.json ...
	I1007 11:58:07.098680 1179332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:58:07.098734 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:07.115160 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:07.204691 1179332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 11:58:07.208888 1179332 start.go:128] duration metric: took 9.777437366s to createHost
	I1007 11:58:07.208915 1179332 start.go:83] releasing machines lock for "addons-504513", held for 9.777576491s
	I1007 11:58:07.209016 1179332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-504513
	I1007 11:58:07.224626 1179332 ssh_runner.go:195] Run: cat /version.json
	I1007 11:58:07.224684 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:07.224746 1179332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:58:07.224824 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:07.242750 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:07.246874 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:07.469577 1179332 ssh_runner.go:195] Run: systemctl --version
	I1007 11:58:07.473829 1179332 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:58:07.614302 1179332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 11:58:07.618575 1179332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:58:07.637809 1179332 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 11:58:07.637884 1179332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:58:07.674599 1179332 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1007 11:58:07.674666 1179332 start.go:495] detecting cgroup driver to use...
	I1007 11:58:07.674714 1179332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 11:58:07.674792 1179332 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:58:07.692306 1179332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:58:07.703769 1179332 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:58:07.703892 1179332 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:58:07.718727 1179332 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:58:07.734353 1179332 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:58:07.816646 1179332 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:58:07.911740 1179332 docker.go:233] disabling docker service ...
	I1007 11:58:07.911813 1179332 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:58:07.933244 1179332 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:58:07.945676 1179332 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:58:08.030678 1179332 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:58:08.126405 1179332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:58:08.139249 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:58:08.157455 1179332 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:58:08.157530 1179332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.167765 1179332 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:58:08.167838 1179332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.177987 1179332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.188064 1179332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.198652 1179332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:58:08.207828 1179332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.217489 1179332 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.233630 1179332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.243254 1179332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:58:08.251854 1179332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:58:08.260111 1179332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:58:08.344806 1179332 ssh_runner.go:195] Run: sudo systemctl restart crio
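	The sed/printf sequence above edits /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. As a hedged aside, the result can be spot-checked from inside the node; the snippet below is only an illustrative sketch and assumes the same file path the logged commands use:
	# rough sketch (run inside the minikube node, e.g. via `minikube ssh`):
	# confirm the values that the sed commands logged above are expected to have written
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf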
	I1007 11:58:08.458458 1179332 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:58:08.458543 1179332 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:58:08.462407 1179332 start.go:563] Will wait 60s for crictl version
	I1007 11:58:08.462520 1179332 ssh_runner.go:195] Run: which crictl
	I1007 11:58:08.465975 1179332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:58:08.503763 1179332 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 11:58:08.503869 1179332 ssh_runner.go:195] Run: crio --version
	I1007 11:58:08.545387 1179332 ssh_runner.go:195] Run: crio --version
	I1007 11:58:08.590991 1179332 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 11:58:08.593005 1179332 cli_runner.go:164] Run: docker network inspect addons-504513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 11:58:08.609477 1179332 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1007 11:58:08.613043 1179332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:58:08.623688 1179332 kubeadm.go:883] updating cluster {Name:addons-504513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-504513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 11:58:08.623806 1179332 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:58:08.623870 1179332 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:58:08.694717 1179332 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:58:08.694744 1179332 crio.go:433] Images already preloaded, skipping extraction
	I1007 11:58:08.694800 1179332 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:58:08.729925 1179332 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:58:08.729951 1179332 cache_images.go:84] Images are preloaded, skipping loading
	I1007 11:58:08.729960 1179332 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.31.1 crio true true} ...
	I1007 11:58:08.730059 1179332 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-504513 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-504513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 11:58:08.730150 1179332 ssh_runner.go:195] Run: crio config
	I1007 11:58:08.778024 1179332 cni.go:84] Creating CNI manager for ""
	I1007 11:58:08.778051 1179332 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 11:58:08.778063 1179332 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:58:08.778107 1179332 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-504513 NodeName:addons-504513 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 11:58:08.778268 1179332 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-504513"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
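	The generated kubeadm config above is what minikube later feeds to kubeadm init (see the command further down). As a hedged, illustrative aside only, a config of this shape can be sanity-checked offline before init runs; assuming a kubeadm v1.31.x binary on the PATH, a minimal check of the file minikube writes would look roughly like:
	# hypothetical offline check of the generated config shown above
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml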
	
	I1007 11:58:08.778336 1179332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:58:08.786816 1179332 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:58:08.786908 1179332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 11:58:08.795644 1179332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1007 11:58:08.814187 1179332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:58:08.832777 1179332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1007 11:58:08.850687 1179332 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1007 11:58:08.854082 1179332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:58:08.864753 1179332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:58:08.946593 1179332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:58:08.960489 1179332 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513 for IP: 192.168.58.2
	I1007 11:58:08.960557 1179332 certs.go:194] generating shared ca certs ...
	I1007 11:58:08.960590 1179332 certs.go:226] acquiring lock for ca certs: {Name:mk2f3e101c3a8a21aa5a00b0d7100cac880b0543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:08.961281 1179332 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key
	I1007 11:58:09.201198 1179332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt ...
	I1007 11:58:09.201235 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt: {Name:mkf68ff1cbb7887c29e41ff1a4dab11b8e1f363e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:09.201435 1179332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key ...
	I1007 11:58:09.201448 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key: {Name:mk6ff50bb1e6fdc479ab8c15639619b2dbd94d5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:09.201542 1179332 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key
	I1007 11:58:09.678906 1179332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.crt ...
	I1007 11:58:09.678941 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.crt: {Name:mkfbafd89c0d50c6f2f3617fd5a4855be4a25abb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:09.679755 1179332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key ...
	I1007 11:58:09.679775 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key: {Name:mk1479ad37fb89b924eaee5a96c9dc3da37f8f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:09.679899 1179332 certs.go:256] generating profile certs ...
	I1007 11:58:09.679967 1179332 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.key
	I1007 11:58:09.679994 1179332 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt with IP's: []
	I1007 11:58:10.029683 1179332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt ...
	I1007 11:58:10.029720 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: {Name:mkd5eba9e658416af57e8eabc03f99ae857d36e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:10.029972 1179332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.key ...
	I1007 11:58:10.029989 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.key: {Name:mk202a91991b7ad436782e803f31a5e28222c04d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:10.030088 1179332 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.key.54c551fb
	I1007 11:58:10.030112 1179332 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.crt.54c551fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1007 11:58:10.150783 1179332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.crt.54c551fb ...
	I1007 11:58:10.150819 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.crt.54c551fb: {Name:mk3413ece515a3e252631a8220d4d1b69f55d166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:10.151019 1179332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.key.54c551fb ...
	I1007 11:58:10.151034 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.key.54c551fb: {Name:mk8cd7f62b87ce698adba5921237b172cd0edb1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:10.151537 1179332 certs.go:381] copying /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.crt.54c551fb -> /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.crt
	I1007 11:58:10.151633 1179332 certs.go:385] copying /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.key.54c551fb -> /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.key
	I1007 11:58:10.151695 1179332 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.key
	I1007 11:58:10.151718 1179332 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.crt with IP's: []
	I1007 11:58:10.430235 1179332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.crt ...
	I1007 11:58:10.430268 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.crt: {Name:mk8f6e148054b88adfad1e5ac523492e177e76ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:10.430459 1179332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.key ...
	I1007 11:58:10.430473 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.key: {Name:mkf3d4920182453ff1b518808d6eded1892e7abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:10.430669 1179332 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 11:58:10.430713 1179332 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem (1078 bytes)
	I1007 11:58:10.430742 1179332 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:58:10.430780 1179332 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem (1675 bytes)
	I1007 11:58:10.431409 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:58:10.457429 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 11:58:10.482050 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:58:10.505960 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 11:58:10.530020 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 11:58:10.558007 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 11:58:10.586586 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:58:10.612192 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 11:58:10.636152 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:58:10.660694 1179332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:58:10.678152 1179332 ssh_runner.go:195] Run: openssl version
	I1007 11:58:10.683548 1179332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:58:10.693067 1179332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:58:10.696527 1179332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:58 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:58:10.696593 1179332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:58:10.703526 1179332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 11:58:10.712976 1179332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:58:10.716170 1179332 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 11:58:10.716222 1179332 kubeadm.go:392] StartCluster: {Name:addons-504513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-504513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:58:10.716341 1179332 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 11:58:10.716401 1179332 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:58:10.752238 1179332 cri.go:89] found id: ""
	I1007 11:58:10.752337 1179332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 11:58:10.761373 1179332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 11:58:10.770277 1179332 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1007 11:58:10.770366 1179332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 11:58:10.779424 1179332 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 11:58:10.779447 1179332 kubeadm.go:157] found existing configuration files:
	
	I1007 11:58:10.779502 1179332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 11:58:10.788075 1179332 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 11:58:10.788167 1179332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 11:58:10.797375 1179332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 11:58:10.806361 1179332 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 11:58:10.806452 1179332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 11:58:10.815574 1179332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 11:58:10.824601 1179332 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 11:58:10.824696 1179332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 11:58:10.833579 1179332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 11:58:10.842631 1179332 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 11:58:10.842738 1179332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 11:58:10.851412 1179332 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1007 11:58:10.894841 1179332 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 11:58:10.894952 1179332 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 11:58:10.914693 1179332 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1007 11:58:10.914816 1179332 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1007 11:58:10.914878 1179332 kubeadm.go:310] OS: Linux
	I1007 11:58:10.914957 1179332 kubeadm.go:310] CGROUPS_CPU: enabled
	I1007 11:58:10.915033 1179332 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1007 11:58:10.915111 1179332 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1007 11:58:10.915184 1179332 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1007 11:58:10.915264 1179332 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1007 11:58:10.915346 1179332 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1007 11:58:10.915470 1179332 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1007 11:58:10.915564 1179332 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1007 11:58:10.915643 1179332 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1007 11:58:10.974347 1179332 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 11:58:10.974464 1179332 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 11:58:10.974562 1179332 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 11:58:10.984588 1179332 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 11:58:10.989146 1179332 out.go:235]   - Generating certificates and keys ...
	I1007 11:58:10.989244 1179332 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 11:58:10.989316 1179332 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 11:58:11.617333 1179332 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 11:58:12.296673 1179332 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 11:58:12.795585 1179332 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 11:58:13.353531 1179332 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 11:58:13.521063 1179332 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 11:58:13.521298 1179332 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-504513 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1007 11:58:14.162177 1179332 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 11:58:14.162504 1179332 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-504513 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1007 11:58:15.190048 1179332 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 11:58:16.460269 1179332 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 11:58:16.754815 1179332 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 11:58:16.755096 1179332 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 11:58:17.006011 1179332 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 11:58:17.201204 1179332 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 11:58:17.552188 1179332 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 11:58:18.099625 1179332 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 11:58:18.317963 1179332 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 11:58:18.318635 1179332 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 11:58:18.321602 1179332 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 11:58:18.323748 1179332 out.go:235]   - Booting up control plane ...
	I1007 11:58:18.323855 1179332 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 11:58:18.323936 1179332 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 11:58:18.325976 1179332 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 11:58:18.336333 1179332 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 11:58:18.341998 1179332 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 11:58:18.342054 1179332 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 11:58:18.435335 1179332 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 11:58:18.435483 1179332 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 11:58:19.936867 1179332 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501687294s
	I1007 11:58:19.936955 1179332 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 11:58:25.439133 1179332 kubeadm.go:310] [api-check] The API server is healthy after 5.502264659s
	I1007 11:58:25.458306 1179332 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 11:58:25.472488 1179332 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 11:58:25.495394 1179332 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 11:58:25.495589 1179332 kubeadm.go:310] [mark-control-plane] Marking the node addons-504513 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 11:58:25.505568 1179332 kubeadm.go:310] [bootstrap-token] Using token: uqi1ty.cqcawz3fon0l6gz3
	I1007 11:58:25.507364 1179332 out.go:235]   - Configuring RBAC rules ...
	I1007 11:58:25.507504 1179332 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 11:58:25.513092 1179332 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 11:58:25.520552 1179332 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 11:58:25.525754 1179332 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 11:58:25.529302 1179332 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 11:58:25.533941 1179332 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 11:58:25.847753 1179332 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 11:58:26.270242 1179332 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 11:58:26.849575 1179332 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 11:58:26.849609 1179332 kubeadm.go:310] 
	I1007 11:58:26.849734 1179332 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 11:58:26.849743 1179332 kubeadm.go:310] 
	I1007 11:58:26.849828 1179332 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 11:58:26.849835 1179332 kubeadm.go:310] 
	I1007 11:58:26.849860 1179332 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 11:58:26.849942 1179332 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 11:58:26.850010 1179332 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 11:58:26.850021 1179332 kubeadm.go:310] 
	I1007 11:58:26.850099 1179332 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 11:58:26.850109 1179332 kubeadm.go:310] 
	I1007 11:58:26.850167 1179332 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 11:58:26.850174 1179332 kubeadm.go:310] 
	I1007 11:58:26.850226 1179332 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 11:58:26.850300 1179332 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 11:58:26.850368 1179332 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 11:58:26.850372 1179332 kubeadm.go:310] 
	I1007 11:58:26.850455 1179332 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 11:58:26.850531 1179332 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 11:58:26.850536 1179332 kubeadm.go:310] 
	I1007 11:58:26.850619 1179332 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uqi1ty.cqcawz3fon0l6gz3 \
	I1007 11:58:26.850725 1179332 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7db072a6d6df4839e4a7b596f4b08ad30308739d831d243298f5bd971a907272 \
	I1007 11:58:26.850749 1179332 kubeadm.go:310] 	--control-plane 
	I1007 11:58:26.850753 1179332 kubeadm.go:310] 
	I1007 11:58:26.850837 1179332 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 11:58:26.850842 1179332 kubeadm.go:310] 
	I1007 11:58:26.850925 1179332 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uqi1ty.cqcawz3fon0l6gz3 \
	I1007 11:58:26.851027 1179332 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7db072a6d6df4839e4a7b596f4b08ad30308739d831d243298f5bd971a907272 
	I1007 11:58:26.851908 1179332 kubeadm.go:310] W1007 11:58:10.891387    1174 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 11:58:26.852216 1179332 kubeadm.go:310] W1007 11:58:10.892285    1174 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 11:58:26.852440 1179332 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1007 11:58:26.852554 1179332 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 11:58:26.852581 1179332 cni.go:84] Creating CNI manager for ""
	I1007 11:58:26.852589 1179332 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 11:58:26.855440 1179332 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 11:58:26.857146 1179332 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 11:58:26.861213 1179332 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 11:58:26.861235 1179332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 11:58:26.881581 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 11:58:27.181388 1179332 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 11:58:27.181537 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:27.181618 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-504513 minikube.k8s.io/updated_at=2024_10_07T11_58_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=addons-504513 minikube.k8s.io/primary=true
	I1007 11:58:27.337392 1179332 ops.go:34] apiserver oom_adj: -16
	I1007 11:58:27.337531 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:27.838553 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:28.337723 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:28.838592 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:29.337822 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:29.837702 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:30.338326 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:30.838205 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:30.929633 1179332 kubeadm.go:1113] duration metric: took 3.748139924s to wait for elevateKubeSystemPrivileges
	I1007 11:58:30.929672 1179332 kubeadm.go:394] duration metric: took 20.213455358s to StartCluster
	I1007 11:58:30.929689 1179332 settings.go:142] acquiring lock: {Name:mk942b9f169f258985b7aaeeac5d38deaf461542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:30.929807 1179332 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 11:58:30.930181 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/kubeconfig: {Name:mkfc1e9493ee5c91f2837c31acce39f4935ee46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:30.930772 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 11:58:30.930787 1179332 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:58:30.931071 1179332 config.go:182] Loaded profile config "addons-504513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:58:30.931118 1179332 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1007 11:58:30.931234 1179332 addons.go:69] Setting yakd=true in profile "addons-504513"
	I1007 11:58:30.931250 1179332 addons.go:234] Setting addon yakd=true in "addons-504513"
	I1007 11:58:30.931290 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.931843 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.932489 1179332 addons.go:69] Setting cloud-spanner=true in profile "addons-504513"
	I1007 11:58:30.932514 1179332 addons.go:234] Setting addon cloud-spanner=true in "addons-504513"
	I1007 11:58:30.932548 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.933054 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.935907 1179332 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-504513"
	I1007 11:58:30.936054 1179332 out.go:177] * Verifying Kubernetes components...
	I1007 11:58:30.936142 1179332 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-504513"
	I1007 11:58:30.936350 1179332 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-504513"
	I1007 11:58:30.936365 1179332 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-504513"
	I1007 11:58:30.936385 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.936838 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.936192 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.938054 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.940561 1179332 addons.go:69] Setting default-storageclass=true in profile "addons-504513"
	I1007 11:58:30.940586 1179332 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-504513"
	I1007 11:58:30.940865 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.941170 1179332 addons.go:69] Setting registry=true in profile "addons-504513"
	I1007 11:58:30.941206 1179332 addons.go:234] Setting addon registry=true in "addons-504513"
	I1007 11:58:30.941270 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.941794 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.952371 1179332 addons.go:69] Setting gcp-auth=true in profile "addons-504513"
	I1007 11:58:30.958730 1179332 mustload.go:65] Loading cluster: addons-504513
	I1007 11:58:30.958994 1179332 config.go:182] Loaded profile config "addons-504513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:58:30.959313 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.960325 1179332 addons.go:69] Setting ingress=true in profile "addons-504513"
	I1007 11:58:30.960375 1179332 addons.go:234] Setting addon ingress=true in "addons-504513"
	I1007 11:58:30.960431 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.961019 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.952452 1179332 addons.go:69] Setting storage-provisioner=true in profile "addons-504513"
	I1007 11:58:30.992491 1179332 addons.go:234] Setting addon storage-provisioner=true in "addons-504513"
	I1007 11:58:30.992559 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.993078 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.993272 1179332 addons.go:69] Setting ingress-dns=true in profile "addons-504513"
	I1007 11:58:30.993309 1179332 addons.go:234] Setting addon ingress-dns=true in "addons-504513"
	I1007 11:58:30.993365 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.993837 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.952462 1179332 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-504513"
	I1007 11:58:31.011896 1179332 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-504513"
	I1007 11:58:31.012302 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:31.015252 1179332 addons.go:69] Setting inspektor-gadget=true in profile "addons-504513"
	I1007 11:58:31.015344 1179332 addons.go:234] Setting addon inspektor-gadget=true in "addons-504513"
	I1007 11:58:31.015414 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:31.015936 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.952471 1179332 addons.go:69] Setting volcano=true in profile "addons-504513"
	I1007 11:58:31.025798 1179332 addons.go:234] Setting addon volcano=true in "addons-504513"
	I1007 11:58:31.025873 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:31.026488 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:31.050036 1179332 addons.go:69] Setting metrics-server=true in profile "addons-504513"
	I1007 11:58:31.050110 1179332 addons.go:234] Setting addon metrics-server=true in "addons-504513"
	I1007 11:58:31.050178 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:31.050662 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.952477 1179332 addons.go:69] Setting volumesnapshots=true in profile "addons-504513"
	I1007 11:58:31.050959 1179332 addons.go:234] Setting addon volumesnapshots=true in "addons-504513"
	I1007 11:58:31.051005 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:31.052022 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.952590 1179332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:58:31.119348 1179332 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 11:58:31.121605 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 11:58:31.121721 1179332 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 11:58:31.123791 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 11:58:31.124038 1179332 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 11:58:31.124052 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 11:58:31.124125 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.129976 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 11:58:31.132493 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1007 11:58:31.134665 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 11:58:31.137228 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 11:58:31.139194 1179332 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 11:58:31.139296 1179332 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 11:58:31.139385 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.142375 1179332 addons.go:234] Setting addon default-storageclass=true in "addons-504513"
	I1007 11:58:31.142416 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:31.142811 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:31.164531 1179332 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 11:58:31.166935 1179332 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 11:58:31.166959 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 11:58:31.167024 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.189500 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 11:58:31.191821 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 11:58:31.193604 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 11:58:31.193644 1179332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 11:58:31.193710 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.230841 1179332 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:58:31.231107 1179332 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 11:58:31.248588 1179332 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 11:58:31.248771 1179332 host.go:66] Checking if "addons-504513" exists ...
	W1007 11:58:31.254306 1179332 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1007 11:58:31.254463 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 11:58:31.254630 1179332 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:58:31.254662 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 11:58:31.254766 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.256221 1179332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 11:58:31.256241 1179332 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 11:58:31.256357 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.273582 1179332 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 11:58:31.273871 1179332 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 11:58:31.276132 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 11:58:31.276154 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 11:58:31.276218 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.276555 1179332 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 11:58:31.276608 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 11:58:31.276689 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.290640 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
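	The one-liner above rewrites the coredns ConfigMap so that in-cluster DNS resolves host.minikube.internal to the host gateway. Assuming the stock Corefile layout minikube ships (the surrounding .:53 block and plugin order are an assumption; the address, hostname, and inserted directives are taken from the command itself), the replaced config ends up containing roughly:
	
	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.58.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }
	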
	I1007 11:58:31.295851 1179332 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:58:31.308046 1179332 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 11:58:31.308067 1179332 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 11:58:31.308130 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.296871 1179332 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-504513"
	I1007 11:58:31.308394 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:31.308815 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:31.319771 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.324314 1179332 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 11:58:31.326068 1179332 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 11:58:31.326089 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 11:58:31.326162 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.339639 1179332 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 11:58:31.344117 1179332 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 11:58:31.344154 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 11:58:31.344221 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.360374 1179332 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 11:58:31.367567 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.368396 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.369160 1179332 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 11:58:31.369174 1179332 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 11:58:31.369242 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.390740 1179332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:58:31.402771 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.404526 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.430160 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.434057 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.446760 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.476098 1179332 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 11:58:31.480363 1179332 out.go:177]   - Using image docker.io/busybox:stable
	I1007 11:58:31.482347 1179332 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 11:58:31.482370 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 11:58:31.482436 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.492433 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.506821 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.508579 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	W1007 11:58:31.510433 1179332 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1007 11:58:31.510463 1179332 retry.go:31] will retry after 170.543342ms: ssh: handshake failed: EOF
	I1007 11:58:31.525998 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.543240 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.742399 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 11:58:31.811567 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 11:58:31.832357 1179332 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 11:58:31.832382 1179332 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 11:58:31.876508 1179332 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 11:58:31.876540 1179332 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 11:58:31.880167 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 11:58:31.891634 1179332 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 11:58:31.891662 1179332 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 11:58:31.926486 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 11:58:31.926513 1179332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 11:58:31.935767 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 11:58:31.952498 1179332 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 11:58:31.952523 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 11:58:31.955318 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 11:58:31.975007 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 11:58:31.975032 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 11:58:31.999307 1179332 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 11:58:31.999335 1179332 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 11:58:32.002701 1179332 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 11:58:32.002731 1179332 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 11:58:32.005384 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:58:32.080507 1179332 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 11:58:32.080531 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 11:58:32.106726 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 11:58:32.106753 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 11:58:32.111469 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 11:58:32.111496 1179332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 11:58:32.118064 1179332 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 11:58:32.118090 1179332 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 11:58:32.134210 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 11:58:32.142893 1179332 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 11:58:32.142918 1179332 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 11:58:32.156865 1179332 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 11:58:32.156892 1179332 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 11:58:32.209179 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 11:58:32.267235 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 11:58:32.267262 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 11:58:32.273457 1179332 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 11:58:32.273482 1179332 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 11:58:32.276669 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 11:58:32.276694 1179332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 11:58:32.291035 1179332 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 11:58:32.291058 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 11:58:32.379596 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 11:58:32.379623 1179332 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 11:58:32.413887 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 11:58:32.435105 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 11:58:32.454246 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 11:58:32.454275 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 11:58:32.485200 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 11:58:32.485226 1179332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 11:58:32.557627 1179332 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:58:32.557649 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 11:58:32.611259 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 11:58:32.611297 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 11:58:32.663666 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 11:58:32.663692 1179332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 11:58:32.721722 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:58:32.783073 1179332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 11:58:32.783099 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 11:58:32.794202 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 11:58:32.794234 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 11:58:32.865301 1179332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 11:58:32.865326 1179332 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 11:58:32.883450 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 11:58:32.883476 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 11:58:32.948003 1179332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 11:58:32.948030 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 11:58:32.953231 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 11:58:32.953304 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 11:58:33.047666 1179332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 11:58:33.047753 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 11:58:33.068957 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 11:58:33.142163 1179332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 11:58:33.142236 1179332 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 11:58:33.268917 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 11:58:33.314257 1179332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.023582098s)
	I1007 11:58:33.314337 1179332 start.go:971] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1007 11:58:33.315466 1179332 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.924701892s)
	I1007 11:58:33.316597 1179332 node_ready.go:35] waiting up to 6m0s for node "addons-504513" to be "Ready" ...
	I1007 11:58:34.081665 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.33922698s)
	I1007 11:58:34.400788 1179332 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-504513" context rescaled to 1 replicas
	I1007 11:58:35.555516 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:35.592566 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.78096353s)
	I1007 11:58:35.887847 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.00764492s)
	I1007 11:58:36.326812 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.39100974s)
	I1007 11:58:36.327085 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.371742002s)
	I1007 11:58:36.355678 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.350256217s)
	W1007 11:58:36.441851 1179332 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
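	The default-storageclass warning above is an ordinary optimistic-concurrency conflict: the addon tried to rewrite the local-path StorageClass while another writer had just updated it, so its stale resourceVersion was rejected. Such conflicts normally clear on retry; a minimal hand-run equivalent that avoids carrying a resourceVersion at all is to patch only the default-class annotation. The sketch below is an assumption beyond what the error shows: "local-path" appears in the message, while "standard" is minikube's usual default StorageClass name, and kubectl stands in for the full /var/lib/minikube/binaries/v1.31.1/kubectl path used elsewhere in this log.
	
	    # demote the rancher local-path class, then promote the standard class
	    kubectl patch storageclass local-path -p \
	      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	    kubectl patch storageclass standard -p \
	      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
	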
	I1007 11:58:37.293472 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.15922741s)
	I1007 11:58:37.293663 1179332 addons.go:475] Verifying addon ingress=true in "addons-504513"
	I1007 11:58:37.293776 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.858647928s)
	I1007 11:58:37.293824 1179332 addons.go:475] Verifying addon metrics-server=true in "addons-504513"
	I1007 11:58:37.293544 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.879627558s)
	I1007 11:58:37.293499 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.084205847s)
	I1007 11:58:37.294259 1179332 addons.go:475] Verifying addon registry=true in "addons-504513"
	I1007 11:58:37.294630 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.572870949s)
	W1007 11:58:37.295655 1179332 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 11:58:37.295681 1179332 retry.go:31] will retry after 297.35536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
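	The "no matches for kind VolumeSnapshotClass" failure above is the usual ordering race: the snapshot CRDs and a VolumeSnapshotClass object were sent in the same kubectl apply, before API discovery had registered the new kinds. As the log shows further down, minikube simply retries (re-applying the same manifests with --force) once the CRDs exist. A minimal sketch of the same fix done by hand, using the manifest paths and CRD name from this log (kubectl again stands in for the full /var/lib/minikube/binaries/v1.31.1/kubectl path), would be:
	
	    # create the snapshot CRDs on their own first
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	    # wait until the new API kinds are actually served
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl wait \
	      --for condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    # only then create the VolumeSnapshotClass CR
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply \
	      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	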
	I1007 11:58:37.294717 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.225655053s)
	I1007 11:58:37.296285 1179332 out.go:177] * Verifying ingress addon...
	I1007 11:58:37.297738 1179332 out.go:177] * Verifying registry addon...
	I1007 11:58:37.297806 1179332 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-504513 service yakd-dashboard -n yakd-dashboard
	
	I1007 11:58:37.300221 1179332 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 11:58:37.302832 1179332 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 11:58:37.314673 1179332 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 11:58:37.314763 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:37.333394 1179332 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 11:58:37.333415 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:37.587779 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.318743695s)
	I1007 11:58:37.587866 1179332 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-504513"
	I1007 11:58:37.589694 1179332 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 11:58:37.592384 1179332 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 11:58:37.593422 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:58:37.611585 1179332 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 11:58:37.611608 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:37.811161 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:37.812196 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:37.821342 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:38.097315 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:38.306192 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:38.306472 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:38.597016 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:38.806112 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:38.807082 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:39.096603 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:39.305595 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:39.307371 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:39.596798 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:39.805719 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:39.807022 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:39.821505 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:40.122417 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:40.305597 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:40.310469 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:40.572647 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.979185223s)
	I1007 11:58:40.597230 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:40.806848 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:40.809193 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:41.097097 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:41.157039 1179332 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 11:58:41.157169 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:41.180600 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:41.306019 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:41.307330 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:41.311356 1179332 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 11:58:41.366331 1179332 addons.go:234] Setting addon gcp-auth=true in "addons-504513"
	I1007 11:58:41.366408 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:41.366914 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:41.400961 1179332 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 11:58:41.401017 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:41.422326 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:41.520733 1179332 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 11:58:41.522694 1179332 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:58:41.524833 1179332 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 11:58:41.524856 1179332 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 11:58:41.568803 1179332 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 11:58:41.568831 1179332 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 11:58:41.588730 1179332 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 11:58:41.588755 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 11:58:41.596484 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:41.612136 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 11:58:41.809057 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:41.809755 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:41.835134 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:42.098130 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:42.330219 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:42.331648 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:42.345920 1179332 addons.go:475] Verifying addon gcp-auth=true in "addons-504513"
	I1007 11:58:42.347766 1179332 out.go:177] * Verifying gcp-auth addon...
	I1007 11:58:42.350377 1179332 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 11:58:42.364423 1179332 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 11:58:42.364504 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:42.597297 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:42.807656 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:42.808496 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:42.854519 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:43.096353 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:43.304237 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:43.306394 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:43.355366 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:43.597490 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:43.804704 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:43.809388 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:43.854264 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:44.096508 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:44.305012 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:44.306679 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:44.320528 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:44.354658 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:44.595646 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:44.805771 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:44.807217 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:44.853535 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:45.096789 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:45.307975 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:45.308779 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:45.354438 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:45.595963 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:45.805020 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:45.807439 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:45.853644 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:46.095782 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:46.304570 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:46.305999 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:46.354055 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:46.596169 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:46.806446 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:46.807314 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:46.820576 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:46.854222 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:47.096143 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:47.304203 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:47.306596 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:47.353577 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:47.596984 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:47.805258 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:47.806744 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:47.854092 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:48.096297 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:48.304315 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:48.305969 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:48.354665 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:48.595903 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:48.804081 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:48.805813 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:48.854202 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:49.096345 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:49.304466 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:49.305875 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:49.319524 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:49.354050 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:49.596769 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:49.803947 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:49.806402 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:49.854077 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:50.096783 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:50.304757 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:50.307088 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:50.353772 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:50.595883 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:50.803996 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:50.806217 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:50.854045 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:51.096322 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:51.304558 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:51.307071 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:51.319909 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:51.353504 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:51.596735 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:51.806034 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:51.807394 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:51.854213 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:52.096363 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:52.304779 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:52.306125 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:52.353761 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:52.596319 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:52.803954 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:52.806396 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:52.854322 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:53.096441 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:53.304439 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:53.305799 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:53.320768 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:53.353895 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:53.596301 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:53.804558 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:53.805987 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:53.853831 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:54.096311 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:54.304574 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:54.305882 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:54.367534 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:54.596759 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:54.804702 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:54.806242 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:54.853734 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:55.096873 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:55.305409 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:55.306999 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:55.354523 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:55.596420 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:55.804955 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:55.806456 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:55.820388 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:55.854242 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:56.095732 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:56.306424 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:56.306070 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:56.354044 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:56.596378 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:56.804439 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:56.807929 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:56.854289 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:57.096042 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:57.304586 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:57.307116 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:57.354087 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:57.596479 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:57.804158 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:57.806665 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:57.853741 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:58.096416 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:58.304724 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:58.306481 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:58.320645 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:58.353697 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:58.596068 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:58.805098 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:58.806458 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:58.853462 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:59.096436 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:59.304476 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:59.305585 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:59.353851 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:59.596424 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:59.804432 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:59.806866 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:59.853540 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:00.104113 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:00.325407 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:00.329695 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:00.336672 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:00.363197 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:00.596428 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:00.805859 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:00.807279 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:00.853627 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:01.095787 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:01.304027 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:01.305580 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:01.353954 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:01.596674 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:01.805162 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:01.806067 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:01.854074 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:02.096626 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:02.304926 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:02.307258 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:02.354205 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:02.595689 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:02.804738 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:02.806137 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:02.820077 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:02.854135 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:03.097340 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:03.304783 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:03.306225 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:03.354955 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:03.595578 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:03.804720 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:03.806580 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:03.854018 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:04.096915 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:04.304356 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:04.306567 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:04.353944 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:04.596360 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:04.805577 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:04.807641 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:04.820447 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:04.853509 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:05.096411 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:05.306040 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:05.307413 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:05.354356 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:05.595625 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:05.806757 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:05.806799 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:05.854385 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:06.096585 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:06.304358 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:06.305676 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:06.353570 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:06.595479 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:06.805111 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:06.806566 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:06.820564 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:06.853542 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:07.096123 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:07.304537 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:07.306952 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:07.354309 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:07.595755 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:07.804767 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:07.806297 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:07.854314 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:08.096620 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:08.305270 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:08.306440 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:08.354597 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:08.595627 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:08.807184 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:08.808269 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:08.853783 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:09.096587 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:09.304659 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:09.306196 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:09.320190 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:09.354277 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:09.596560 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:09.805116 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:09.807312 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:09.853886 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:10.096981 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:10.305994 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:10.307383 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:10.353964 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:10.596500 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:10.805764 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:10.806525 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:10.854364 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:11.096060 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:11.305506 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:11.306916 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:11.354698 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:11.595692 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:11.804946 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:11.806602 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:11.820464 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:11.853692 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:12.096513 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:12.304857 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:12.306265 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:12.353832 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:12.595908 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:12.805006 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:12.806411 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:12.854146 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:13.096225 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:13.304058 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:13.306811 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:13.354295 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:13.596547 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:13.805255 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:13.806741 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:13.820545 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:13.853775 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:14.096408 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:14.304227 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:14.306549 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:14.354522 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:14.596714 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:14.804279 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:14.805795 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:14.853881 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:15.096497 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:15.304235 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:15.305768 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:15.354481 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:15.596582 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:15.805639 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:15.806857 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:15.853212 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:16.109060 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:16.329660 1179332 node_ready.go:49] node "addons-504513" has status "Ready":"True"
	I1007 11:59:16.329687 1179332 node_ready.go:38] duration metric: took 43.013030999s for node "addons-504513" to be "Ready" ...
	I1007 11:59:16.329699 1179332 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:59:16.368573 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:16.381154 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:16.383174 1179332 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 11:59:16.383203 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:16.430331 1179332 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g27sx" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:16.608659 1179332 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 11:59:16.608686 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:16.828405 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:16.829336 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:16.905802 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:17.101498 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:17.307963 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:17.310654 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:17.405036 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:17.597955 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:17.808972 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:17.810771 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:17.909811 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:17.937175 1179332 pod_ready.go:93] pod "coredns-7c65d6cfc9-g27sx" in "kube-system" namespace has status "Ready":"True"
	I1007 11:59:17.937252 1179332 pod_ready.go:82] duration metric: took 1.506886583s for pod "coredns-7c65d6cfc9-g27sx" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.937341 1179332 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.944265 1179332 pod_ready.go:93] pod "etcd-addons-504513" in "kube-system" namespace has status "Ready":"True"
	I1007 11:59:17.944338 1179332 pod_ready.go:82] duration metric: took 6.964823ms for pod "etcd-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.944370 1179332 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.953927 1179332 pod_ready.go:93] pod "kube-apiserver-addons-504513" in "kube-system" namespace has status "Ready":"True"
	I1007 11:59:17.954008 1179332 pod_ready.go:82] duration metric: took 9.615342ms for pod "kube-apiserver-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.954039 1179332 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.963066 1179332 pod_ready.go:93] pod "kube-controller-manager-addons-504513" in "kube-system" namespace has status "Ready":"True"
	I1007 11:59:17.963145 1179332 pod_ready.go:82] duration metric: took 9.083513ms for pod "kube-controller-manager-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.963187 1179332 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j4dwf" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.972188 1179332 pod_ready.go:93] pod "kube-proxy-j4dwf" in "kube-system" namespace has status "Ready":"True"
	I1007 11:59:17.972292 1179332 pod_ready.go:82] duration metric: took 9.066414ms for pod "kube-proxy-j4dwf" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.972322 1179332 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:18.100803 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:18.305138 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:18.306945 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:18.334025 1179332 pod_ready.go:93] pod "kube-scheduler-addons-504513" in "kube-system" namespace has status "Ready":"True"
	I1007 11:59:18.334094 1179332 pod_ready.go:82] duration metric: took 361.719532ms for pod "kube-scheduler-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:18.334124 1179332 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:18.353955 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:18.601572 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:18.805701 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:18.807732 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:18.854024 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:19.098453 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:19.306975 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:19.307829 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:19.405855 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:19.598060 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:19.806097 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:19.806773 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:19.853892 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:20.097683 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:20.305307 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:20.307745 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:20.340388 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:20.353482 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:20.597548 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:20.805503 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:20.806616 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:20.854067 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:21.098112 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:21.305280 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:21.308782 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:21.353403 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:21.597886 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:21.805972 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:21.807858 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:21.854404 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:22.097626 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:22.306599 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:22.307677 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:22.341091 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:22.353793 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:22.597475 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:22.806207 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:22.807180 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:22.854438 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:23.098366 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:23.305524 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:23.308615 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:23.354280 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:23.597255 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:23.805798 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:23.808173 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:23.874900 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:24.099403 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:24.307628 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:24.325939 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:24.349546 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:24.354353 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:24.596872 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:24.806458 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:24.809789 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:24.856398 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:25.098007 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:25.333845 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:25.338244 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:25.367514 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:25.597308 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:25.806666 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:25.810582 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:25.856465 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:26.097976 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:26.306137 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:26.309228 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:26.354908 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:26.596482 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:26.807728 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:26.808653 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:26.841548 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:26.906082 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:27.099648 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:27.304729 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:27.307767 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:27.353481 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:27.597464 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:27.807291 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:27.807671 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:27.853774 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:28.097330 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:28.305866 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:28.307882 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:28.354050 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:28.598095 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:28.805904 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:28.808719 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:28.854376 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:29.097804 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:29.305332 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:29.307990 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:29.341233 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:29.404717 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:29.597324 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:29.807962 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:29.908429 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:29.908761 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:30.098286 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:30.306498 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:30.307873 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:30.354621 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:30.597589 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:30.805090 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:30.807070 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:30.853911 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:31.101339 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:31.306124 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:31.309521 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:31.355514 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:31.598380 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:31.804969 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:31.807526 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:31.844061 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:31.857648 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:32.097825 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:32.305500 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:32.308813 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:32.358530 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:32.597693 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:32.815480 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:32.818006 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:32.853624 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:33.097734 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:33.306620 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:33.307350 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:33.353970 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:33.599093 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:33.808834 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:33.809351 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:33.853742 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:34.098729 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:34.307614 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:34.310776 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:34.343763 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:34.354255 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:34.598521 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:34.806015 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:34.808625 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:34.854672 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:35.105176 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:35.313916 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:35.315378 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:35.412687 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:35.597882 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:35.806576 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:35.806784 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:35.854913 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:36.097213 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:36.304891 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:36.306952 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:36.344390 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:36.354517 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:36.598191 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:36.806798 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:36.807799 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:36.853873 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:37.097343 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:37.305533 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:37.308315 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:37.354539 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:37.598610 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:37.819078 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:37.830116 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:37.911117 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:38.098195 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:38.309065 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:38.310300 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:38.354316 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:38.598877 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:38.804775 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:38.806605 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:38.844506 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:38.853982 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:39.103636 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:39.308732 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:39.310253 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:39.355960 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:39.601829 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:39.807614 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:39.813126 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:39.858263 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:40.099145 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:40.306285 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:40.308632 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:40.354197 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:40.597903 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:40.807440 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:40.809176 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:40.854620 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:41.097510 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:41.313520 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:41.314842 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:41.361953 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:41.377518 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:41.598129 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:41.843526 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:41.847466 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:41.855618 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:42.105720 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:42.305994 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:42.309609 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:42.354653 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:42.598666 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:42.808504 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:42.810116 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:42.854370 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:43.097994 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:43.307105 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:43.310168 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:43.354962 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:43.608692 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:43.806712 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:43.809356 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:43.845893 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:43.853868 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:44.097786 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:44.305991 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:44.306913 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:44.354060 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:44.599509 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:44.807114 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:44.807416 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:44.854364 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:45.104501 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:45.309360 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:45.314305 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:45.355099 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:45.598406 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:45.806497 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:45.809216 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:45.855712 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:46.098288 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:46.305212 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:46.306616 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:46.339983 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:46.354806 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:46.597515 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:46.811396 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:46.813450 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:46.854369 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:47.098447 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:47.311173 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:47.311242 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:47.355599 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:47.605734 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:47.808708 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:47.810086 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:47.858088 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:48.102608 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:48.306261 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:48.311629 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:48.348948 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:48.360080 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:48.600558 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:48.807729 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:48.809783 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:48.854902 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:49.100040 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:49.306838 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:49.308893 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:49.354246 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:49.597221 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:49.805247 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:49.807724 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:49.854649 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:50.098770 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:50.305655 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:50.308697 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:50.354114 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:50.598787 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:50.807577 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:50.810135 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:50.841013 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:50.905774 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:51.097337 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:51.305316 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:51.306798 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:51.354140 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:51.597517 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:51.805341 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:51.807012 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:51.853701 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:52.096993 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:52.305259 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:52.307220 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:52.353971 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:52.597658 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:52.806983 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:52.807599 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:52.844702 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:52.854151 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:53.097563 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:53.305455 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:53.309355 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:53.354051 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:53.600008 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:53.805785 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:53.807015 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:53.854111 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:54.097249 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:54.304510 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:54.306414 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:54.357063 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:54.602760 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:54.806660 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:54.807773 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:54.853840 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:55.097675 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:55.304890 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:55.306999 1179332 kapi.go:107] duration metric: took 1m18.004166252s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 11:59:55.340917 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:55.354017 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:55.598215 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:55.804986 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:55.854244 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:56.098431 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:56.305427 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:56.354159 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:56.600767 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:56.811443 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:56.862427 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:57.102668 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:57.305837 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:57.345611 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:57.355390 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:57.598734 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:57.805156 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:57.854796 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:58.099847 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:58.305517 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:58.359560 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:58.598857 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:58.805468 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:58.854475 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:59.098206 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:59.307199 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:59.354076 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:59.598893 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:59.831817 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:59.851552 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:59.923200 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:00.105985 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:00.428918 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:00.430459 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:00.721068 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:00.817839 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:00.881310 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:01.134010 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:01.313756 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:01.355594 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:01.600772 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:01.806813 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:01.855897 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:02.104648 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:02.305878 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:02.345882 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:02.355700 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:02.600447 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:02.805401 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:02.854250 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:03.099516 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:03.305221 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:03.354854 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:03.600596 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:03.808322 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:03.858236 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:04.106491 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:04.308134 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:04.355414 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:04.598817 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:04.804849 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:04.841234 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:04.854179 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:05.098373 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:05.305914 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:05.356235 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:05.597320 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:05.804883 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:05.853549 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:06.098731 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:06.305164 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:06.355088 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:06.597237 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:06.806198 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:06.854120 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:07.098511 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:07.308850 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:07.341090 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:07.407654 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:07.597521 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:07.805423 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:07.854933 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:08.097079 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:08.305158 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:08.354247 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:08.598106 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:08.806207 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:08.854306 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:09.102629 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:09.305638 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:09.341234 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:09.354606 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:09.597963 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:09.806211 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:09.853961 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:10.099590 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:10.305218 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:10.355568 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:10.597674 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:10.806006 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:10.854483 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:11.098648 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:11.306155 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:11.341367 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:11.354951 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:11.601326 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:11.806265 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:11.855036 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:12.099419 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:12.306952 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:12.354951 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:12.597617 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:12.805504 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:12.854270 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:13.097658 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:13.305465 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:13.341601 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:13.358427 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:13.597546 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:13.807591 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:13.854606 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:14.098580 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:14.306620 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:14.354200 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:14.605246 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:14.804372 1179332 kapi.go:107] duration metric: took 1m37.504149213s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 12:00:14.854382 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:15.097402 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:15.349074 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:15.355551 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:15.597607 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:15.854124 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:16.099037 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:16.355894 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:16.599709 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:16.855501 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:17.099114 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:17.355680 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:17.598191 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:17.840929 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:17.854801 1179332 kapi.go:107] duration metric: took 1m35.504424875s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 12:00:17.857052 1179332 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-504513 cluster.
	I1007 12:00:17.858947 1179332 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 12:00:17.860660 1179332 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1007 12:00:18.097169 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:18.597821 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:19.099496 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:19.603614 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:20.098146 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:20.339647 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:20.598251 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:21.098393 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:21.598548 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:22.102335 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:22.340772 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:22.597775 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:23.098251 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:23.597657 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:24.097449 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:24.597023 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:24.840301 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:25.098058 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:25.598421 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:26.099583 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:26.598798 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:26.840595 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:27.098901 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:27.598120 1179332 kapi.go:107] duration metric: took 1m50.005731237s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 12:00:27.600300 1179332 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1007 12:00:27.602352 1179332 addons.go:510] duration metric: took 1m56.67122167s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1007 12:00:29.341417 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:31.841016 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:34.339904 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:36.340946 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:38.839846 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:40.840732 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:42.841119 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:44.841498 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:47.340724 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:49.341077 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:51.840437 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:53.840817 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:56.340698 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:56.840837 1179332 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"True"
	I1007 12:00:56.840864 1179332 pod_ready.go:82] duration metric: took 1m38.50671883s for pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace to be "Ready" ...
	I1007 12:00:56.840876 1179332 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zfrr9" in "kube-system" namespace to be "Ready" ...
	I1007 12:00:56.846132 1179332 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-zfrr9" in "kube-system" namespace has status "Ready":"True"
	I1007 12:00:56.846155 1179332 pod_ready.go:82] duration metric: took 5.270992ms for pod "nvidia-device-plugin-daemonset-zfrr9" in "kube-system" namespace to be "Ready" ...
	I1007 12:00:56.846177 1179332 pod_ready.go:39] duration metric: took 1m40.516457222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:00:56.846193 1179332 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:00:56.846228 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:00:56.846290 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:00:56.907066 1179332 cri.go:89] found id: "2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea"
	I1007 12:00:56.907098 1179332 cri.go:89] found id: ""
	I1007 12:00:56.907107 1179332 logs.go:282] 1 containers: [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea]
	I1007 12:00:56.907164 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:56.910957 1179332 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 12:00:56.911028 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:00:56.949186 1179332 cri.go:89] found id: "ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43"
	I1007 12:00:56.949208 1179332 cri.go:89] found id: ""
	I1007 12:00:56.949216 1179332 logs.go:282] 1 containers: [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43]
	I1007 12:00:56.949275 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:56.952774 1179332 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 12:00:56.952859 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:00:56.994569 1179332 cri.go:89] found id: "c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc"
	I1007 12:00:56.994592 1179332 cri.go:89] found id: ""
	I1007 12:00:56.994600 1179332 logs.go:282] 1 containers: [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc]
	I1007 12:00:56.994656 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:56.998061 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:00:56.998141 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:00:57.044125 1179332 cri.go:89] found id: "cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8"
	I1007 12:00:57.044147 1179332 cri.go:89] found id: ""
	I1007 12:00:57.044154 1179332 logs.go:282] 1 containers: [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8]
	I1007 12:00:57.044220 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:57.048304 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:00:57.048431 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:00:57.098294 1179332 cri.go:89] found id: "fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916"
	I1007 12:00:57.098324 1179332 cri.go:89] found id: ""
	I1007 12:00:57.098333 1179332 logs.go:282] 1 containers: [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916]
	I1007 12:00:57.098395 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:57.102286 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:00:57.102375 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:00:57.144417 1179332 cri.go:89] found id: "09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a"
	I1007 12:00:57.144450 1179332 cri.go:89] found id: ""
	I1007 12:00:57.144459 1179332 logs.go:282] 1 containers: [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a]
	I1007 12:00:57.144560 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:57.148407 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 12:00:57.148507 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:00:57.187777 1179332 cri.go:89] found id: "82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4"
	I1007 12:00:57.187801 1179332 cri.go:89] found id: ""
	I1007 12:00:57.187810 1179332 logs.go:282] 1 containers: [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4]
	I1007 12:00:57.187867 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:57.191240 1179332 logs.go:123] Gathering logs for kubelet ...
	I1007 12:00:57.191266 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 12:00:57.261592 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: W1007 11:58:36.789481    1488 reflector.go:561] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-504513' and this object
	W1007 12:00:57.261930 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: E1007 11:58:36.789546    1488 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:00:57.262104 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: W1007 11:58:36.789481    1488 reflector.go:561] object-"gadget"/"gadget": failed to list *v1.ConfigMap: configmaps "gadget" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-504513' and this object
	W1007 12:00:57.262310 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: E1007 11:58:36.789593    1488 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"gadget\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"gadget\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:00:57.268166 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.078980    1488 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:00:57.268413 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079028    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:00:57.268577 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.079441    1488 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-504513" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:00:57.268787 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079474    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:00:57.269539 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.099084    1488 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-504513' and this object
	W1007 12:00:57.269763 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.099134    1488 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	I1007 12:00:57.306965 1179332 logs.go:123] Gathering logs for kube-apiserver [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea] ...
	I1007 12:00:57.306998 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea"
	I1007 12:00:57.364251 1179332 logs.go:123] Gathering logs for kube-scheduler [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8] ...
	I1007 12:00:57.364304 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8"
	I1007 12:00:57.411705 1179332 logs.go:123] Gathering logs for kube-controller-manager [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a] ...
	I1007 12:00:57.411736 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a"
	I1007 12:00:57.489405 1179332 logs.go:123] Gathering logs for container status ...
	I1007 12:00:57.489448 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:00:57.544945 1179332 logs.go:123] Gathering logs for dmesg ...
	I1007 12:00:57.544986 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:00:57.563506 1179332 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:00:57.563535 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:00:57.766411 1179332 logs.go:123] Gathering logs for etcd [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43] ...
	I1007 12:00:57.766440 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43"
	I1007 12:00:57.817311 1179332 logs.go:123] Gathering logs for coredns [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc] ...
	I1007 12:00:57.817350 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc"
	I1007 12:00:57.865138 1179332 logs.go:123] Gathering logs for kube-proxy [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916] ...
	I1007 12:00:57.865171 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916"
	I1007 12:00:57.905184 1179332 logs.go:123] Gathering logs for kindnet [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4] ...
	I1007 12:00:57.905214 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4"
	I1007 12:00:57.952799 1179332 logs.go:123] Gathering logs for CRI-O ...
	I1007 12:00:57.952830 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 12:00:58.045323 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 12:00:58.045356 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 12:00:58.045434 1179332 out.go:270] X Problems detected in kubelet:
	W1007 12:00:58.045448 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079028    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:00:58.045457 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.079441    1488 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-504513" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:00:58.045487 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079474    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:00:58.045496 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.099084    1488 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-504513' and this object
	W1007 12:00:58.045513 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.099134    1488 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	I1007 12:00:58.045519 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 12:00:58.045526 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:01:08.046085 1179332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:01:08.062328 1179332 api_server.go:72] duration metric: took 2m37.131510279s to wait for apiserver process to appear ...
	I1007 12:01:08.062355 1179332 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:01:08.062391 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:01:08.062454 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:01:08.102422 1179332 cri.go:89] found id: "2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea"
	I1007 12:01:08.102448 1179332 cri.go:89] found id: ""
	I1007 12:01:08.102456 1179332 logs.go:282] 1 containers: [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea]
	I1007 12:01:08.102523 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.106349 1179332 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 12:01:08.106425 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:01:08.150691 1179332 cri.go:89] found id: "ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43"
	I1007 12:01:08.150716 1179332 cri.go:89] found id: ""
	I1007 12:01:08.150725 1179332 logs.go:282] 1 containers: [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43]
	I1007 12:01:08.150792 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.154462 1179332 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 12:01:08.154543 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:01:08.195341 1179332 cri.go:89] found id: "c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc"
	I1007 12:01:08.195365 1179332 cri.go:89] found id: ""
	I1007 12:01:08.195373 1179332 logs.go:282] 1 containers: [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc]
	I1007 12:01:08.195431 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.198978 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:01:08.199063 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:01:08.241625 1179332 cri.go:89] found id: "cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8"
	I1007 12:01:08.241649 1179332 cri.go:89] found id: ""
	I1007 12:01:08.241657 1179332 logs.go:282] 1 containers: [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8]
	I1007 12:01:08.241716 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.245407 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:01:08.245480 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:01:08.284240 1179332 cri.go:89] found id: "fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916"
	I1007 12:01:08.284310 1179332 cri.go:89] found id: ""
	I1007 12:01:08.284318 1179332 logs.go:282] 1 containers: [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916]
	I1007 12:01:08.284382 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.287827 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:01:08.287923 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:01:08.332475 1179332 cri.go:89] found id: "09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a"
	I1007 12:01:08.332500 1179332 cri.go:89] found id: ""
	I1007 12:01:08.332508 1179332 logs.go:282] 1 containers: [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a]
	I1007 12:01:08.332566 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.336647 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 12:01:08.336722 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:01:08.384557 1179332 cri.go:89] found id: "82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4"
	I1007 12:01:08.384580 1179332 cri.go:89] found id: ""
	I1007 12:01:08.384588 1179332 logs.go:282] 1 containers: [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4]
	I1007 12:01:08.384647 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.388151 1179332 logs.go:123] Gathering logs for dmesg ...
	I1007 12:01:08.388178 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:01:08.404446 1179332 logs.go:123] Gathering logs for kube-apiserver [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea] ...
	I1007 12:01:08.404477 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea"
	I1007 12:01:08.468061 1179332 logs.go:123] Gathering logs for coredns [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc] ...
	I1007 12:01:08.468094 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc"
	I1007 12:01:08.511810 1179332 logs.go:123] Gathering logs for kube-proxy [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916] ...
	I1007 12:01:08.511842 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916"
	I1007 12:01:08.551880 1179332 logs.go:123] Gathering logs for kube-controller-manager [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a] ...
	I1007 12:01:08.551907 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a"
	I1007 12:01:08.625208 1179332 logs.go:123] Gathering logs for kindnet [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4] ...
	I1007 12:01:08.625244 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4"
	I1007 12:01:08.671264 1179332 logs.go:123] Gathering logs for CRI-O ...
	I1007 12:01:08.671295 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 12:01:08.765237 1179332 logs.go:123] Gathering logs for container status ...
	I1007 12:01:08.765274 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:01:08.821720 1179332 logs.go:123] Gathering logs for kubelet ...
	I1007 12:01:08.821760 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 12:01:08.885555 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: W1007 11:58:36.789481    1488 reflector.go:561] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-504513' and this object
	W1007 12:01:08.885866 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: E1007 11:58:36.789546    1488 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:08.886059 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: W1007 11:58:36.789481    1488 reflector.go:561] object-"gadget"/"gadget": failed to list *v1.ConfigMap: configmaps "gadget" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-504513' and this object
	W1007 12:01:08.886294 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: E1007 11:58:36.789593    1488 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"gadget\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"gadget\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:08.891885 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.078980    1488 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:01:08.892163 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079028    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:08.892369 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.079441    1488 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-504513" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:01:08.892601 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079474    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:08.893340 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.099084    1488 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-504513' and this object
	W1007 12:01:08.893589 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.099134    1488 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	I1007 12:01:08.946059 1179332 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:01:08.946115 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:01:09.111731 1179332 logs.go:123] Gathering logs for etcd [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43] ...
	I1007 12:01:09.111764 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43"
	I1007 12:01:09.166562 1179332 logs.go:123] Gathering logs for kube-scheduler [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8] ...
	I1007 12:01:09.166600 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8"
	I1007 12:01:09.213476 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 12:01:09.213504 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 12:01:09.213577 1179332 out.go:270] X Problems detected in kubelet:
	W1007 12:01:09.213595 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079028    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:09.213601 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.079441    1488 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-504513" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:01:09.213760 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079474    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:09.213769 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.099084    1488 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-504513' and this object
	W1007 12:01:09.213778 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.099134    1488 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	I1007 12:01:09.213785 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 12:01:09.213798 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:01:19.214945 1179332 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:01:19.222767 1179332 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1007 12:01:19.223808 1179332 api_server.go:141] control plane version: v1.31.1
	I1007 12:01:19.223835 1179332 api_server.go:131] duration metric: took 11.161471885s to wait for apiserver health ...
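The healthz probe above is a plain HTTPS GET against the apiserver endpoint logged two lines up; a sketch of issuing the same probe by hand with curl, assuming the client certificates for this profile sit in the usual ~/.minikube locations (an assumption, not taken from this log):

	curl --cacert ~/.minikube/ca.crt \
	     --cert   ~/.minikube/profiles/addons-504513/client.crt \
	     --key    ~/.minikube/profiles/addons-504513/client.key \
	     https://192.168.58.2:8443/healthz
	# prints "ok" on a healthy control plane, matching the 200 response recorded above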
	I1007 12:01:19.223844 1179332 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:01:19.223865 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:01:19.223930 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:01:19.264580 1179332 cri.go:89] found id: "2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea"
	I1007 12:01:19.264648 1179332 cri.go:89] found id: ""
	I1007 12:01:19.264672 1179332 logs.go:282] 1 containers: [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea]
	I1007 12:01:19.264742 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.268092 1179332 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 12:01:19.268162 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:01:19.307042 1179332 cri.go:89] found id: "ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43"
	I1007 12:01:19.307071 1179332 cri.go:89] found id: ""
	I1007 12:01:19.307081 1179332 logs.go:282] 1 containers: [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43]
	I1007 12:01:19.307149 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.310907 1179332 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 12:01:19.310985 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:01:19.350991 1179332 cri.go:89] found id: "c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc"
	I1007 12:01:19.351013 1179332 cri.go:89] found id: ""
	I1007 12:01:19.351021 1179332 logs.go:282] 1 containers: [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc]
	I1007 12:01:19.351081 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.354652 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:01:19.354726 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:01:19.391597 1179332 cri.go:89] found id: "cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8"
	I1007 12:01:19.391673 1179332 cri.go:89] found id: ""
	I1007 12:01:19.391695 1179332 logs.go:282] 1 containers: [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8]
	I1007 12:01:19.391772 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.395203 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:01:19.395265 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:01:19.433148 1179332 cri.go:89] found id: "fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916"
	I1007 12:01:19.433176 1179332 cri.go:89] found id: ""
	I1007 12:01:19.433185 1179332 logs.go:282] 1 containers: [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916]
	I1007 12:01:19.433273 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.437060 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:01:19.437162 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:01:19.481229 1179332 cri.go:89] found id: "09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a"
	I1007 12:01:19.481260 1179332 cri.go:89] found id: ""
	I1007 12:01:19.481269 1179332 logs.go:282] 1 containers: [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a]
	I1007 12:01:19.481346 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.485249 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 12:01:19.485371 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:01:19.525804 1179332 cri.go:89] found id: "82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4"
	I1007 12:01:19.525827 1179332 cri.go:89] found id: ""
	I1007 12:01:19.525836 1179332 logs.go:282] 1 containers: [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4]
	I1007 12:01:19.525894 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.529508 1179332 logs.go:123] Gathering logs for kubelet ...
	I1007 12:01:19.529534 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 12:01:19.588137 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: W1007 11:58:36.789481    1488 reflector.go:561] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-504513' and this object
	W1007 12:01:19.588403 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: E1007 11:58:36.789546    1488 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:19.588572 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: W1007 11:58:36.789481    1488 reflector.go:561] object-"gadget"/"gadget": failed to list *v1.ConfigMap: configmaps "gadget" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-504513' and this object
	W1007 12:01:19.588780 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: E1007 11:58:36.789593    1488 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"gadget\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"gadget\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:19.594272 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.078980    1488 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:01:19.594492 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079028    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:19.594658 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.079441    1488 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-504513" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:01:19.594870 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079474    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:19.595582 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.099084    1488 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-504513' and this object
	W1007 12:01:19.595810 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.099134    1488 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	I1007 12:01:19.634068 1179332 logs.go:123] Gathering logs for kube-apiserver [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea] ...
	I1007 12:01:19.634099 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea"
	I1007 12:01:19.688673 1179332 logs.go:123] Gathering logs for kube-scheduler [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8] ...
	I1007 12:01:19.688712 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8"
	I1007 12:01:19.740630 1179332 logs.go:123] Gathering logs for kindnet [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4] ...
	I1007 12:01:19.740670 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4"
	I1007 12:01:19.781961 1179332 logs.go:123] Gathering logs for CRI-O ...
	I1007 12:01:19.781999 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 12:01:19.875352 1179332 logs.go:123] Gathering logs for container status ...
	I1007 12:01:19.875395 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:01:19.928974 1179332 logs.go:123] Gathering logs for dmesg ...
	I1007 12:01:19.929061 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:01:19.948652 1179332 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:01:19.948684 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:01:20.091457 1179332 logs.go:123] Gathering logs for etcd [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43] ...
	I1007 12:01:20.091505 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43"
	I1007 12:01:20.138011 1179332 logs.go:123] Gathering logs for coredns [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc] ...
	I1007 12:01:20.138042 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc"
	I1007 12:01:20.194946 1179332 logs.go:123] Gathering logs for kube-proxy [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916] ...
	I1007 12:01:20.194982 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916"
	I1007 12:01:20.237494 1179332 logs.go:123] Gathering logs for kube-controller-manager [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a] ...
	I1007 12:01:20.237526 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a"
	I1007 12:01:20.311415 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 12:01:20.311445 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 12:01:20.311505 1179332 out.go:270] X Problems detected in kubelet:
	W1007 12:01:20.311520 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079028    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:20.311529 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.079441    1488 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-504513" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:01:20.311544 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079474    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:20.311551 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.099084    1488 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-504513' and this object
	W1007 12:01:20.311558 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.099134    1488 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	I1007 12:01:20.311571 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 12:01:20.311577 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:01:30.324091 1179332 system_pods.go:59] 18 kube-system pods found
	I1007 12:01:30.324129 1179332 system_pods.go:61] "coredns-7c65d6cfc9-g27sx" [5afe3dbe-0baa-43f6-ad8f-5390d1d0ae08] Running
	I1007 12:01:30.324137 1179332 system_pods.go:61] "csi-hostpath-attacher-0" [b11f0a35-8e10-4fe5-85bf-566d75b11483] Running
	I1007 12:01:30.324142 1179332 system_pods.go:61] "csi-hostpath-resizer-0" [07b093a3-8c0f-4e12-a54b-1fdbf5c0baad] Running
	I1007 12:01:30.324147 1179332 system_pods.go:61] "csi-hostpathplugin-pwkd9" [01e688b9-41ec-4a0d-bafe-5c808db8abae] Running
	I1007 12:01:30.324151 1179332 system_pods.go:61] "etcd-addons-504513" [4c452ee8-38c8-48c1-8e67-076ea6a91a1d] Running
	I1007 12:01:30.324156 1179332 system_pods.go:61] "kindnet-mg82f" [c5a2e036-ec86-4a4f-9367-5a435dbc6aae] Running
	I1007 12:01:30.324160 1179332 system_pods.go:61] "kube-apiserver-addons-504513" [4e671cad-3b64-40e2-af15-fa2bc3fa1163] Running
	I1007 12:01:30.324165 1179332 system_pods.go:61] "kube-controller-manager-addons-504513" [9d6bbb18-052d-4221-8b09-f8cda0278a8a] Running
	I1007 12:01:30.324169 1179332 system_pods.go:61] "kube-ingress-dns-minikube" [af553361-9217-4f39-9943-971471f491a9] Running
	I1007 12:01:30.324174 1179332 system_pods.go:61] "kube-proxy-j4dwf" [7fe779f0-fb2d-41bd-bdb2-992cd98ca14c] Running
	I1007 12:01:30.324178 1179332 system_pods.go:61] "kube-scheduler-addons-504513" [469da8da-0f7a-4471-9aa5-5f9983d57e88] Running
	I1007 12:01:30.324183 1179332 system_pods.go:61] "metrics-server-84c5f94fbc-zzgph" [daa11124-8d8b-41b4-8005-50023acf5391] Running
	I1007 12:01:30.324194 1179332 system_pods.go:61] "nvidia-device-plugin-daemonset-zfrr9" [c8079eb2-5614-417f-b0b4-df99129833bd] Running
	I1007 12:01:30.324198 1179332 system_pods.go:61] "registry-66c9cd494c-fb9ws" [b8858fa3-9d16-4d5e-ba15-1cb90ece82b4] Running
	I1007 12:01:30.324203 1179332 system_pods.go:61] "registry-proxy-j7gr2" [2a98cc91-7c93-4911-ac0f-e807e5996a10] Running
	I1007 12:01:30.324207 1179332 system_pods.go:61] "snapshot-controller-56fcc65765-klwff" [41d64bb2-6bad-487c-9674-178e8ad3e59f] Running
	I1007 12:01:30.324213 1179332 system_pods.go:61] "snapshot-controller-56fcc65765-xlccl" [7f67806b-c9c0-45a1-aa15-8515c20f3073] Running
	I1007 12:01:30.324218 1179332 system_pods.go:61] "storage-provisioner" [942e5d23-1e6b-4fa6-a249-26972b7daa1d] Running
	I1007 12:01:30.324227 1179332 system_pods.go:74] duration metric: took 11.100375842s to wait for pod list to return data ...
	I1007 12:01:30.324238 1179332 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:01:30.326984 1179332 default_sa.go:45] found service account: "default"
	I1007 12:01:30.327015 1179332 default_sa.go:55] duration metric: took 2.771412ms for default service account to be created ...
	I1007 12:01:30.327026 1179332 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:01:30.337757 1179332 system_pods.go:86] 18 kube-system pods found
	I1007 12:01:30.337802 1179332 system_pods.go:89] "coredns-7c65d6cfc9-g27sx" [5afe3dbe-0baa-43f6-ad8f-5390d1d0ae08] Running
	I1007 12:01:30.337811 1179332 system_pods.go:89] "csi-hostpath-attacher-0" [b11f0a35-8e10-4fe5-85bf-566d75b11483] Running
	I1007 12:01:30.337817 1179332 system_pods.go:89] "csi-hostpath-resizer-0" [07b093a3-8c0f-4e12-a54b-1fdbf5c0baad] Running
	I1007 12:01:30.337823 1179332 system_pods.go:89] "csi-hostpathplugin-pwkd9" [01e688b9-41ec-4a0d-bafe-5c808db8abae] Running
	I1007 12:01:30.337828 1179332 system_pods.go:89] "etcd-addons-504513" [4c452ee8-38c8-48c1-8e67-076ea6a91a1d] Running
	I1007 12:01:30.337835 1179332 system_pods.go:89] "kindnet-mg82f" [c5a2e036-ec86-4a4f-9367-5a435dbc6aae] Running
	I1007 12:01:30.337841 1179332 system_pods.go:89] "kube-apiserver-addons-504513" [4e671cad-3b64-40e2-af15-fa2bc3fa1163] Running
	I1007 12:01:30.337846 1179332 system_pods.go:89] "kube-controller-manager-addons-504513" [9d6bbb18-052d-4221-8b09-f8cda0278a8a] Running
	I1007 12:01:30.337858 1179332 system_pods.go:89] "kube-ingress-dns-minikube" [af553361-9217-4f39-9943-971471f491a9] Running
	I1007 12:01:30.337862 1179332 system_pods.go:89] "kube-proxy-j4dwf" [7fe779f0-fb2d-41bd-bdb2-992cd98ca14c] Running
	I1007 12:01:30.337868 1179332 system_pods.go:89] "kube-scheduler-addons-504513" [469da8da-0f7a-4471-9aa5-5f9983d57e88] Running
	I1007 12:01:30.337877 1179332 system_pods.go:89] "metrics-server-84c5f94fbc-zzgph" [daa11124-8d8b-41b4-8005-50023acf5391] Running
	I1007 12:01:30.337882 1179332 system_pods.go:89] "nvidia-device-plugin-daemonset-zfrr9" [c8079eb2-5614-417f-b0b4-df99129833bd] Running
	I1007 12:01:30.337885 1179332 system_pods.go:89] "registry-66c9cd494c-fb9ws" [b8858fa3-9d16-4d5e-ba15-1cb90ece82b4] Running
	I1007 12:01:30.337891 1179332 system_pods.go:89] "registry-proxy-j7gr2" [2a98cc91-7c93-4911-ac0f-e807e5996a10] Running
	I1007 12:01:30.337898 1179332 system_pods.go:89] "snapshot-controller-56fcc65765-klwff" [41d64bb2-6bad-487c-9674-178e8ad3e59f] Running
	I1007 12:01:30.337902 1179332 system_pods.go:89] "snapshot-controller-56fcc65765-xlccl" [7f67806b-c9c0-45a1-aa15-8515c20f3073] Running
	I1007 12:01:30.337907 1179332 system_pods.go:89] "storage-provisioner" [942e5d23-1e6b-4fa6-a249-26972b7daa1d] Running
	I1007 12:01:30.337919 1179332 system_pods.go:126] duration metric: took 10.887374ms to wait for k8s-apps to be running ...
	I1007 12:01:30.337928 1179332 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:01:30.337992 1179332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:01:30.351956 1179332 system_svc.go:56] duration metric: took 14.001514ms WaitForService to wait for kubelet
	I1007 12:01:30.351990 1179332 kubeadm.go:582] duration metric: took 2m59.421176533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:01:30.352012 1179332 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:01:30.355352 1179332 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 12:01:30.355395 1179332 node_conditions.go:123] node cpu capacity is 2
	I1007 12:01:30.355408 1179332 node_conditions.go:105] duration metric: took 3.38068ms to run NodePressure ...
	I1007 12:01:30.355422 1179332 start.go:241] waiting for startup goroutines ...
	I1007 12:01:30.355429 1179332 start.go:246] waiting for cluster config update ...
	I1007 12:01:30.355446 1179332 start.go:255] writing updated cluster config ...
	I1007 12:01:30.355748 1179332 ssh_runner.go:195] Run: rm -f paused
	I1007 12:01:30.420515 1179332 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:01:30.424304 1179332 out.go:177] * Done! kubectl is now configured to use "addons-504513" cluster and "default" namespace by default
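For reference, the repeated "Gathering logs for ..." passes above all shell out to the same few commands on the node; a minimal sketch of running them by hand against this profile (not part of the test harness, just the equivalent manual invocations):

	minikube -p addons-504513 ssh -- sudo crictl ps -a                    # container status, as gathered above
	minikube -p addons-504513 ssh -- sudo journalctl -u kubelet -n 400    # source of the "Problems detected in kubelet" lines
	minikube -p addons-504513 ssh -- sudo journalctl -u crio -n 400       # same data as the CRI-O section below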
	
	
	==> CRI-O <==
	Oct 07 12:12:15 addons-504513 crio[962]: time="2024-10-07 12:12:15.249596436Z" level=info msg="Started container" PID=13243 containerID=4595bc9d59c715bb9142259e93c96323747cd046ff5838ea7d8d353ce7d3cfea description=default/busybox/busybox id=37330f7c-84d5-4957-a981-f7a617b6098b name=/runtime.v1.RuntimeService/StartContainer sandboxID=06d9e375d2d09629d080843ea5fa6b191ba4f0cf93c7edd08a304b7ce566d81e
	Oct 07 12:13:12 addons-504513 crio[962]: time="2024-10-07 12:13:12.375303493Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-6kpzh/POD" id=fb300cbb-ac54-42f6-9f4e-80d7e91785ba name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 07 12:13:12 addons-504513 crio[962]: time="2024-10-07 12:13:12.375366410Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 07 12:13:12 addons-504513 crio[962]: time="2024-10-07 12:13:12.406659629Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-6kpzh Namespace:default ID:92d20e27119547a917effe7472fe66db994ecb66180958af58c7c3c35d3ac1b3 UID:47a1142d-d7b3-4ebc-81d5-c76d3a9b1ecf NetNS:/var/run/netns/2f4cf74f-aef6-4c78-a5fe-549b5c2a977a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 07 12:13:12 addons-504513 crio[962]: time="2024-10-07 12:13:12.406839476Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-6kpzh to CNI network \"kindnet\" (type=ptp)"
	Oct 07 12:13:12 addons-504513 crio[962]: time="2024-10-07 12:13:12.419568669Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-6kpzh Namespace:default ID:92d20e27119547a917effe7472fe66db994ecb66180958af58c7c3c35d3ac1b3 UID:47a1142d-d7b3-4ebc-81d5-c76d3a9b1ecf NetNS:/var/run/netns/2f4cf74f-aef6-4c78-a5fe-549b5c2a977a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 07 12:13:12 addons-504513 crio[962]: time="2024-10-07 12:13:12.419728569Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-6kpzh for CNI network kindnet (type=ptp)"
	Oct 07 12:13:12 addons-504513 crio[962]: time="2024-10-07 12:13:12.423484230Z" level=info msg="Ran pod sandbox 92d20e27119547a917effe7472fe66db994ecb66180958af58c7c3c35d3ac1b3 with infra container: default/hello-world-app-55bf9c44b4-6kpzh/POD" id=fb300cbb-ac54-42f6-9f4e-80d7e91785ba name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 07 12:13:12 addons-504513 crio[962]: time="2024-10-07 12:13:12.425211193Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=813d93ae-6b63-42d4-8a4c-161814fdddd0 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 12:13:12 addons-504513 crio[962]: time="2024-10-07 12:13:12.425437128Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=813d93ae-6b63-42d4-8a4c-161814fdddd0 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 12:13:12 addons-504513 crio[962]: time="2024-10-07 12:13:12.426230009Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=9f626385-ea42-44ad-b973-e55e45941ecd name=/runtime.v1.ImageService/PullImage
	Oct 07 12:13:12 addons-504513 crio[962]: time="2024-10-07 12:13:12.431470444Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 07 12:13:12 addons-504513 crio[962]: time="2024-10-07 12:13:12.716440100Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 07 12:13:13 addons-504513 crio[962]: time="2024-10-07 12:13:13.447917732Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=9f626385-ea42-44ad-b973-e55e45941ecd name=/runtime.v1.ImageService/PullImage
	Oct 07 12:13:13 addons-504513 crio[962]: time="2024-10-07 12:13:13.448951252Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5dd3cd00-9f01-483c-89c0-b5a2db09ce77 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 12:13:13 addons-504513 crio[962]: time="2024-10-07 12:13:13.449783518Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5dd3cd00-9f01-483c-89c0-b5a2db09ce77 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 12:13:13 addons-504513 crio[962]: time="2024-10-07 12:13:13.453404615Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=599af1ee-fa0b-4ae1-8ef9-4e7b1cd2f458 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 12:13:13 addons-504513 crio[962]: time="2024-10-07 12:13:13.454117021Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=599af1ee-fa0b-4ae1-8ef9-4e7b1cd2f458 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 12:13:13 addons-504513 crio[962]: time="2024-10-07 12:13:13.454931686Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-6kpzh/hello-world-app" id=510393a2-5f1d-4d62-8539-b0abc1869a3d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 12:13:13 addons-504513 crio[962]: time="2024-10-07 12:13:13.455026915Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 07 12:13:13 addons-504513 crio[962]: time="2024-10-07 12:13:13.487253027Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2526e3ab05c6a801b0ede40160fbe356d4fc4d00c7e3b8fdb087a4186c6d1cae/merged/etc/passwd: no such file or directory"
	Oct 07 12:13:13 addons-504513 crio[962]: time="2024-10-07 12:13:13.487458252Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2526e3ab05c6a801b0ede40160fbe356d4fc4d00c7e3b8fdb087a4186c6d1cae/merged/etc/group: no such file or directory"
	Oct 07 12:13:13 addons-504513 crio[962]: time="2024-10-07 12:13:13.555239145Z" level=info msg="Created container da28ac1204dde7a5c18a691c39df8aebed3c501f8c660198cb719af014c80857: default/hello-world-app-55bf9c44b4-6kpzh/hello-world-app" id=510393a2-5f1d-4d62-8539-b0abc1869a3d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 12:13:13 addons-504513 crio[962]: time="2024-10-07 12:13:13.556160502Z" level=info msg="Starting container: da28ac1204dde7a5c18a691c39df8aebed3c501f8c660198cb719af014c80857" id=e0f1d0e6-11e4-4580-8309-dcbd0033d764 name=/runtime.v1.RuntimeService/StartContainer
	Oct 07 12:13:13 addons-504513 crio[962]: time="2024-10-07 12:13:13.567829819Z" level=info msg="Started container" PID=13433 containerID=da28ac1204dde7a5c18a691c39df8aebed3c501f8c660198cb719af014c80857 description=default/hello-world-app-55bf9c44b4-6kpzh/hello-world-app id=e0f1d0e6-11e4-4580-8309-dcbd0033d764 name=/runtime.v1.RuntimeService/StartContainer sandboxID=92d20e27119547a917effe7472fe66db994ecb66180958af58c7c3c35d3ac1b3
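The ImageStatus/PullImage/ImageStatus sequence CRI-O logs above for docker.io/kicbase/echo-server:1.0 can be reproduced directly with crictl on the node; a sketch, assuming the same profile:

	minikube -p addons-504513 ssh -- sudo crictl pull docker.io/kicbase/echo-server:1.0
	minikube -p addons-504513 ssh -- sudo crictl inspecti docker.io/kicbase/echo-server:1.0   # shows the repo tags and digests logged above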
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                       ATTEMPT             POD ID              POD
	da28ac1204dde       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app            0                   92d20e2711954       hello-world-app-55bf9c44b4-6kpzh
	4595bc9d59c71       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          58 seconds ago           Running             busybox                    0                   06d9e375d2d09       busybox
	20b2e23c95e1b       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago            Running             nginx                      0                   789bcbc8c471f       nginx
	16bf94c3fd730       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             13 minutes ago           Running             controller                 0                   d24eda426e882       ingress-nginx-controller-bc57996ff-tm449
	0bcbcb1644b94       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              13 minutes ago           Running             yakd                       0                   185d8808ad0bf       yakd-dashboard-67d98fc6b-w6jm6
	6da6a055d3971       gcr.io/cloud-spanner-emulator/emulator@sha256:6ce1265c73355797b34d2531c7146eed3996346f860517e35d1434182eb5f01d               13 minutes ago           Running             cloud-spanner-emulator     0                   1272452becbe2       cloud-spanner-emulator-5b584cc74-vr46n
	368c814bc16fc       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        13 minutes ago           Running             metrics-server             0                   b702d9dba195b       metrics-server-84c5f94fbc-zzgph
	5ec3755d0ea11       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             13 minutes ago           Running             local-path-provisioner     0                   e9511346954d0       local-path-provisioner-86d989889c-x566g
	f8380365a6a12       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             13 minutes ago           Running             minikube-ingress-dns       0                   1da9bd8d7c4d6       kube-ingress-dns-minikube
	c62d63e5421c6       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             13 minutes ago           Exited              patch                      2                   c047c758a10f2       ingress-nginx-admission-patch-z2lbq
	14af3b2be5764       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   13 minutes ago           Exited              create                     0                   31bc11753707f       ingress-nginx-admission-create-46tms
	eb5214e40c18b       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                     13 minutes ago           Running             nvidia-device-plugin-ctr   0                   1932bb9824ec8       nvidia-device-plugin-daemonset-zfrr9
	5a5d902eb7092       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             13 minutes ago           Running             storage-provisioner        0                   155e335a997d7       storage-provisioner
	c60017af89967       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             13 minutes ago           Running             coredns                    0                   c07f24bd8fa65       coredns-7c65d6cfc9-g27sx
	fd40e0c547214       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             14 minutes ago           Running             kube-proxy                 0                   a2234f27ea43b       kube-proxy-j4dwf
	82e9dcb708dff       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             14 minutes ago           Running             kindnet-cni                0                   75a020e3a4985       kindnet-mg82f
	2f1eb19abef58       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             14 minutes ago           Running             kube-apiserver             0                   9b8dd3b909ac4       kube-apiserver-addons-504513
	09fd038c50124       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             14 minutes ago           Running             kube-controller-manager    0                   6f49b3f0d3ef2       kube-controller-manager-addons-504513
	cafddae5dc35a       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             14 minutes ago           Running             kube-scheduler             0                   0165c7b27ab2a       kube-scheduler-addons-504513
	ea9071e39cce0       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             14 minutes ago           Running             etcd                       0                   881d912aca59e       etcd-addons-504513
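The table above is the CRI-level view of every container on the node, including the two Exited admission-hook containers. It can be reproduced by hand; a minimal Go sketch follows, assuming the addons-504513 profile is still running and that the crio node image ships crictl (an assumption, not something this report states):

// container_status.go: print the same listing as the "container status" block
// above by running crictl inside the minikube node. Sketch only; the binary path
// and profile name come from this report, everything else is an assumption.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "minikube ssh -- <cmd>" executes the command inside the node container;
	// "crictl ps -a" lists both running and exited CRI containers.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "addons-504513",
		"ssh", "--", "sudo", "crictl", "ps", "-a")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("crictl listing failed:", err)
	}
}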
	
	
	==> coredns [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc] <==
	[INFO] 10.244.0.13:37033 - 48149 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002604775s
	[INFO] 10.244.0.13:37033 - 48980 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000192647s
	[INFO] 10.244.0.13:37033 - 51582 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000116028s
	[INFO] 10.244.0.13:42480 - 64645 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000613551s
	[INFO] 10.244.0.13:42480 - 64448 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000446004s
	[INFO] 10.244.0.13:41940 - 28653 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082338s
	[INFO] 10.244.0.13:41940 - 28874 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004343s
	[INFO] 10.244.0.13:59158 - 33403 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000405955s
	[INFO] 10.244.0.13:59158 - 33215 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104327s
	[INFO] 10.244.0.13:54285 - 31752 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001361575s
	[INFO] 10.244.0.13:54285 - 32254 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.000965369s
	[INFO] 10.244.0.13:35030 - 47470 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000069595s
	[INFO] 10.244.0.13:35030 - 47618 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000055549s
	[INFO] 10.244.0.20:55840 - 24757 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000156979s
	[INFO] 10.244.0.20:53300 - 51720 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000089912s
	[INFO] 10.244.0.20:42287 - 55082 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122387s
	[INFO] 10.244.0.20:56901 - 28315 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134088s
	[INFO] 10.244.0.20:34995 - 53807 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124061s
	[INFO] 10.244.0.20:35059 - 13287 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090075s
	[INFO] 10.244.0.20:57744 - 11887 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002337791s
	[INFO] 10.244.0.20:39434 - 7626 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002128914s
	[INFO] 10.244.0.20:40787 - 8238 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001412693s
	[INFO] 10.244.0.20:40856 - 23653 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001987819s
	[INFO] 10.244.0.23:35395 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000182046s
	[INFO] 10.244.0.23:33137 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112541s
	
	
	==> describe nodes <==
	Name:               addons-504513
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-504513
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=addons-504513
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T11_58_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-504513
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 11:58:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-504513
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:13:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:12:32 +0000   Mon, 07 Oct 2024 11:58:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:12:32 +0000   Mon, 07 Oct 2024 11:58:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:12:32 +0000   Mon, 07 Oct 2024 11:58:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:12:32 +0000   Mon, 07 Oct 2024 11:59:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    addons-504513
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9f2535c52294194a698e057647c458a
	  System UUID:                ce552362-e2a4-4a6f-95fb-4dd9841bc164
	  Boot ID:                    9a8fefe6-3962-4cb9-809a-2b740ac8992f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     cloud-spanner-emulator-5b584cc74-vr46n      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-6kpzh            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-tm449    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-g27sx                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-addons-504513                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-mg82f                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-504513                250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-504513       200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-j4dwf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-504513                100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-zzgph             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         14m
	  kube-system                 nvidia-device-plugin-daemonset-zfrr9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-x566g     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-w6jm6              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 14m   kube-proxy       
	  Normal   Starting                 14m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  14m   kubelet          Node addons-504513 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m   kubelet          Node addons-504513 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m   kubelet          Node addons-504513 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m   node-controller  Node addons-504513 event: Registered Node addons-504513 in Controller
	  Normal   NodeReady                13m   kubelet          Node addons-504513 status is now: NodeReady
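The conditions, allocatable figures and events above are what the node object itself reports. For reference, a small client-go sketch that reads the same fields; it assumes the addons-504513 context lives in the default kubeconfig and is not part of the test harness:

// node_conditions.go: read the Ready/pressure conditions and allocatable
// resources shown by the "describe nodes" block above. Sketch only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-504513", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Same rows as the Conditions table above: type, status, reason.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String(),
		"memory:", node.Status.Allocatable.Memory().String())
}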
	
	
	==> dmesg <==
	[Oct 7 11:30] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43] <==
	{"level":"info","ts":"2024-10-07T11:58:21.232298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-07T11:58:21.232348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-07T11:58:21.232364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-10-07T11:58:21.232377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-10-07T11:58:21.232384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-10-07T11:58:21.232394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-10-07T11:58:21.232402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-10-07T11:58:21.240316Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:58:21.244443Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:addons-504513 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T11:58:21.244645Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:58:21.244714Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:58:21.244736Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:58:21.244750Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:58:21.244974Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:58:21.245621Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:58:21.246473Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-10-07T11:58:21.247051Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:58:21.247863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T11:58:21.263456Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T11:58:21.263492Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T11:58:33.056985Z","caller":"traceutil/trace.go:171","msg":"trace[708153501] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"111.67362ms","start":"2024-10-07T11:58:32.945295Z","end":"2024-10-07T11:58:33.056969Z","steps":["trace[708153501] 'process raft request'  (duration: 111.348585ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T11:58:33.416827Z","caller":"traceutil/trace.go:171","msg":"trace[499780291] transaction","detail":"{read_only:false; response_revision:324; number_of_response:1; }","duration":"107.545974ms","start":"2024-10-07T11:58:33.309265Z","end":"2024-10-07T11:58:33.416811Z","steps":["trace[499780291] 'process raft request'  (duration: 107.418443ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:08:21.499271Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1479}
	{"level":"info","ts":"2024-10-07T12:08:21.529619Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1479,"took":"29.91558ms","hash":2429785176,"current-db-size-bytes":6074368,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":3039232,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2024-10-07T12:08:21.529675Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2429785176,"revision":1479,"compact-revision":-1}
	
	
	==> kernel <==
	 12:13:14 up  7:55,  0 users,  load average: 0.27, 0.34, 1.05
	Linux addons-504513 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4] <==
	I1007 12:11:05.468962       1 main.go:299] handling current node
	I1007 12:11:15.472326       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:11:15.472359       1 main.go:299] handling current node
	I1007 12:11:25.476312       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:11:25.476345       1 main.go:299] handling current node
	I1007 12:11:35.469384       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:11:35.469433       1 main.go:299] handling current node
	I1007 12:11:45.469155       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:11:45.469312       1 main.go:299] handling current node
	I1007 12:11:55.476327       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:11:55.476362       1 main.go:299] handling current node
	I1007 12:12:05.473267       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:12:05.473301       1 main.go:299] handling current node
	I1007 12:12:15.469625       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:12:15.469666       1 main.go:299] handling current node
	I1007 12:12:25.476334       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:12:25.476368       1 main.go:299] handling current node
	I1007 12:12:35.469625       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:12:35.469662       1 main.go:299] handling current node
	I1007 12:12:45.469904       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:12:45.470033       1 main.go:299] handling current node
	I1007 12:12:55.472281       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:12:55.472407       1 main.go:299] handling current node
	I1007 12:13:05.469866       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:13:05.469900       1 main.go:299] handling current node
	
	
	==> kube-apiserver [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1007 12:00:56.722308       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.76.92:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.76.92:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.76.92:443: connect: connection refused" logger="UnhandledError"
	E1007 12:00:56.725228       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.76.92:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.76.92:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.76.92:443: connect: connection refused" logger="UnhandledError"
	E1007 12:00:56.729046       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.76.92:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.76.92:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.76.92:443: connect: connection refused" logger="UnhandledError"
	I1007 12:00:56.817604       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1007 12:09:43.776650       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.13.90"}
	I1007 12:10:19.332987       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1007 12:10:32.090675       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 12:10:32.090815       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 12:10:32.124001       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 12:10:32.124097       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 12:10:32.143896       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 12:10:32.144013       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 12:10:32.171510       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 12:10:32.172950       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 12:10:32.297298       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 12:10:32.297360       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1007 12:10:33.172617       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1007 12:10:33.297504       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1007 12:10:33.305258       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1007 12:10:45.920707       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1007 12:10:51.479943       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1007 12:10:51.774459       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.216.123"}
	I1007 12:13:12.310959       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.63.126"}
	
	
	==> kube-controller-manager [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a] <==
	W1007 12:11:06.353038       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:11:06.353094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:11:26.663097       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:11:26.663242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:11:49.587750       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:11:49.587793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:11:50.083147       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:11:50.083199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:11:51.020571       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:11:51.020616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:12:00.730852       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:12:00.730919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:12:31.259490       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:12:31.259532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 12:12:32.983981       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-504513"
	W1007 12:12:42.161120       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:12:42.161300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:12:42.737280       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:12:42.737323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:12:59.755209       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:12:59.755252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 12:13:12.074877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.105165ms"
	I1007 12:13:12.083771       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="8.68916ms"
	I1007 12:13:12.084017       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.673µs"
	I1007 12:13:12.091927       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="704.8µs"
	
	
	==> kube-proxy [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916] <==
	I1007 11:58:36.726948       1 server_linux.go:66] "Using iptables proxy"
	I1007 11:58:37.170988       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.58.2"]
	E1007 11:58:37.171199       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 11:58:37.431764       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1007 11:58:37.431897       1 server_linux.go:169] "Using iptables Proxier"
	I1007 11:58:37.433758       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 11:58:37.434159       1 server.go:483] "Version info" version="v1.31.1"
	I1007 11:58:37.434425       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:58:37.442472       1 config.go:199] "Starting service config controller"
	I1007 11:58:37.442559       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 11:58:37.442622       1 config.go:105] "Starting endpoint slice config controller"
	I1007 11:58:37.442652       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 11:58:37.451350       1 config.go:328] "Starting node config controller"
	I1007 11:58:37.451485       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 11:58:37.547337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 11:58:37.547617       1 shared_informer.go:320] Caches are synced for service config
	I1007 11:58:37.552363       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8] <==
	E1007 11:58:24.659403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.659517       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 11:58:24.659573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E1007 11:58:24.659651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.659843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 11:58:24.659917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.659951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 11:58:24.660037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.660460       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 11:58:24.660542       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 11:58:24.660943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 11:58:24.661010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.661149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 11:58:24.661202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.661308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 11:58:24.661355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.661461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 11:58:24.661507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.661657       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 11:58:24.661709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.661821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 11:58:24.661871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.661963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 11:58:24.662023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1007 11:58:26.252435       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 12:11:56 addons-504513 kubelet[1488]: E1007 12:11:56.443233    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303116442983747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:549591,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:11:56 addons-504513 kubelet[1488]: E1007 12:11:56.443272    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303116442983747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:549591,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:11:58 addons-504513 kubelet[1488]: I1007 12:11:58.188159    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 12:11:58 addons-504513 kubelet[1488]: E1007 12:11:58.189741    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="603bf7a0-7f9c-4a72-985b-e5db3c9ca21c"
	Oct 07 12:12:06 addons-504513 kubelet[1488]: E1007 12:12:06.445932    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303126445699927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:549591,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:06 addons-504513 kubelet[1488]: E1007 12:12:06.445969    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303126445699927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:549591,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:12 addons-504513 kubelet[1488]: I1007 12:12:12.187930    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 12:12:15 addons-504513 kubelet[1488]: I1007 12:12:15.370548    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 12:12:15 addons-504513 kubelet[1488]: I1007 12:12:15.381194    1488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=82.327859876 podStartE2EDuration="1m24.3811757s" podCreationTimestamp="2024-10-07 12:10:51 +0000 UTC" firstStartedPulling="2024-10-07 12:10:52.057222337 +0000 UTC m=+746.008185212" lastFinishedPulling="2024-10-07 12:10:54.110538161 +0000 UTC m=+748.061501036" observedRunningTime="2024-10-07 12:10:54.221708987 +0000 UTC m=+748.172671861" watchObservedRunningTime="2024-10-07 12:12:15.3811757 +0000 UTC m=+829.332138574"
	Oct 07 12:12:16 addons-504513 kubelet[1488]: E1007 12:12:16.448370    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303136448089539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:558916,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:16 addons-504513 kubelet[1488]: E1007 12:12:16.448842    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303136448089539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:558916,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:26 addons-504513 kubelet[1488]: E1007 12:12:26.451505    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303146451276104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:558916,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:26 addons-504513 kubelet[1488]: E1007 12:12:26.451554    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303146451276104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:558916,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:36 addons-504513 kubelet[1488]: E1007 12:12:36.454186    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303156453952045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:558916,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:36 addons-504513 kubelet[1488]: E1007 12:12:36.454223    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303156453952045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:558916,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:46 addons-504513 kubelet[1488]: E1007 12:12:46.456857    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303166456618183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:558916,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:46 addons-504513 kubelet[1488]: E1007 12:12:46.456902    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303166456618183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:558916,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:47 addons-504513 kubelet[1488]: I1007 12:12:47.188035    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5b584cc74-vr46n" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 12:12:47 addons-504513 kubelet[1488]: I1007 12:12:47.188055    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-zfrr9" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 12:12:56 addons-504513 kubelet[1488]: E1007 12:12:56.458960    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303176458743529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:558916,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:56 addons-504513 kubelet[1488]: E1007 12:12:56.458998    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303176458743529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:558916,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:06 addons-504513 kubelet[1488]: E1007 12:13:06.461314    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303186461073281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:558916,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:06 addons-504513 kubelet[1488]: E1007 12:13:06.461355    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303186461073281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:558916,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:12 addons-504513 kubelet[1488]: I1007 12:13:12.073327    1488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=57.406090869 podStartE2EDuration="11m41.0733069s" podCreationTimestamp="2024-10-07 12:01:31 +0000 UTC" firstStartedPulling="2024-10-07 12:01:31.508144955 +0000 UTC m=+185.459107830" lastFinishedPulling="2024-10-07 12:12:15.175360978 +0000 UTC m=+829.126323861" observedRunningTime="2024-10-07 12:12:15.382162303 +0000 UTC m=+829.333125177" watchObservedRunningTime="2024-10-07 12:13:12.0733069 +0000 UTC m=+886.024269783"
	Oct 07 12:13:12 addons-504513 kubelet[1488]: I1007 12:13:12.192493    1488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfw2w\" (UniqueName: \"kubernetes.io/projected/47a1142d-d7b3-4ebc-81d5-c76d3a9b1ecf-kube-api-access-vfw2w\") pod \"hello-world-app-55bf9c44b4-6kpzh\" (UID: \"47a1142d-d7b3-4ebc-81d5-c76d3a9b1ecf\") " pod="default/hello-world-app-55bf9c44b4-6kpzh"
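The kubelet messages above point at the same root cause as the PullSecret failure reported earlier: the gcp-auth secret never materialised in the default namespace, so the busybox image pull sits in ImagePullBackOff. A hedged way to confirm that directly is sketched below; it only re-runs kubectl, the same tool the report's post-mortem commands already use:

// gcp_auth_check.go: confirm what the kubelet lines above imply - the gcp-auth
// pull secret is missing in the default namespace - and dump the busybox pod's
// events. Sketch only; the context name comes from this report.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		fmt.Println("-> exit:", err)
	}
}

func main() {
	run("--context", "addons-504513", "-n", "default", "get", "secret", "gcp-auth")
	run("--context", "addons-504513", "-n", "default", "get", "events",
		"--field-selector", "involvedObject.name=busybox")
}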
	
	
	==> storage-provisioner [5a5d902eb70920ddbf3acd681555c221118e7498466da95d2b36224cb168560b] <==
	I1007 11:59:17.143747       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 11:59:17.159464       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 11:59:17.159542       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 11:59:17.168861       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 11:59:17.169114       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-504513_4bd8a0fd-b92b-4d3c-99a1-0b6504c0ad34!
	I1007 11:59:17.169398       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bd91f230-4fa0-49d1-a01e-4a1414f60404", APIVersion:"v1", ResourceVersion:"881", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-504513_4bd8a0fd-b92b-4d3c-99a1-0b6504c0ad34 became leader
	I1007 11:59:17.272609       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-504513_4bd8a0fd-b92b-4d3c-99a1-0b6504c0ad34!
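The provisioner acquires the kube-system/k8s.io-minikube-hostpath lease before it starts handling claims. For reference, a minimal client-go leader-election sketch of the same pattern; note the log above shows an Endpoints-based lock while this sketch uses the newer Leases lock, and the identity and timings here are illustrative only:

// leader_sketch.go: acquire a lease named like the one in the log above and run
// work only while holding it. Not the storage-provisioner's own code.
package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		cs.CoreV1(), cs.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader, start provisioning") },
			OnStoppedLeading: func() { log.Println("lost leadership, stop") },
		},
	})
}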
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-504513 -n addons-504513
helpers_test.go:261: (dbg) Run:  kubectl --context addons-504513 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-46tms ingress-nginx-admission-patch-z2lbq
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-504513 describe pod ingress-nginx-admission-create-46tms ingress-nginx-admission-patch-z2lbq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-504513 describe pod ingress-nginx-admission-create-46tms ingress-nginx-admission-patch-z2lbq: exit status 1 (102.114365ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-46tms" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z2lbq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-504513 describe pod ingress-nginx-admission-create-46tms ingress-nginx-admission-patch-z2lbq: exit status 1
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-504513 addons disable ingress-dns --alsologtostderr -v=1: (1.343186364s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 addons disable ingress --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-504513 addons disable ingress --alsologtostderr -v=1: (7.755807285s)
--- FAIL: TestAddons/parallel/Ingress (153.14s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (347.92s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 11.368545ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zzgph" [daa11124-8d8b-41b4-8005-50023acf5391] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004221572s
addons_test.go:402: (dbg) Run:  kubectl --context addons-504513 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-504513 top pods -n kube-system: exit status 1 (116.698997ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-g27sx, age: 11m33.702938045s

                                                
                                                
** /stderr **
I1007 12:10:06.706111 1178462 retry.go:31] will retry after 4.151331874s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-504513 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-504513 top pods -n kube-system: exit status 1 (145.376676ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-g27sx, age: 11m38.000683333s

** /stderr **
I1007 12:10:11.003812 1178462 retry.go:31] will retry after 4.106875804s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-504513 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-504513 top pods -n kube-system: exit status 1 (102.495089ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-g27sx, age: 11m42.211195081s

** /stderr **
I1007 12:10:15.214471 1178462 retry.go:31] will retry after 5.052243497s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-504513 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-504513 top pods -n kube-system: exit status 1 (103.984286ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-g27sx, age: 11m47.365451672s

** /stderr **
I1007 12:10:20.371744 1178462 retry.go:31] will retry after 11.622036708s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-504513 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-504513 top pods -n kube-system: exit status 1 (149.792238ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-g27sx, age: 11m59.141020614s

** /stderr **
I1007 12:10:32.144188 1178462 retry.go:31] will retry after 13.195881206s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-504513 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-504513 top pods -n kube-system: exit status 1 (151.179585ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-g27sx, age: 12m12.48797928s

** /stderr **
I1007 12:10:45.491756 1178462 retry.go:31] will retry after 17.818996156s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-504513 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-504513 top pods -n kube-system: exit status 1 (86.558237ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-g27sx, age: 12m30.396297398s

** /stderr **
I1007 12:11:03.399322 1178462 retry.go:31] will retry after 23.640777334s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-504513 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-504513 top pods -n kube-system: exit status 1 (90.229426ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-g27sx, age: 12m54.131785854s

** /stderr **
I1007 12:11:27.134741 1178462 retry.go:31] will retry after 58.025810476s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-504513 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-504513 top pods -n kube-system: exit status 1 (88.98337ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-g27sx, age: 13m52.24565032s

** /stderr **
I1007 12:12:25.249867 1178462 retry.go:31] will retry after 1m3.189793664s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-504513 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-504513 top pods -n kube-system: exit status 1 (89.65214ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-g27sx, age: 14m55.527125268s

** /stderr **
I1007 12:13:28.530491 1178462 retry.go:31] will retry after 1m20.516065996s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-504513 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-504513 top pods -n kube-system: exit status 1 (93.330255ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-g27sx, age: 16m16.137723115s

** /stderr **
I1007 12:14:49.140857 1178462 retry.go:31] will retry after 56.282270492s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-504513 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-504513 top pods -n kube-system: exit status 1 (97.181588ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-g27sx, age: 17m12.521064207s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
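The retries above come from the harness's backoff helper (retry.go), which keeps re-running kubectl top with growing delays until it succeeds or the overall budget is spent; metrics never became available, so the step fails once the window is exhausted. Below is a minimal stand-alone sketch of the same polling pattern, not taken from the test source: the context name comes from this log, while the overall budget and the initial delay are rough approximations of what the log shows.

// sketch: poll "kubectl top pods" with growing delays until metrics are served or the deadline passes
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(7 * time.Minute) // assumed overall budget
	delay := 4 * time.Second                    // assumed initial backoff
	for {
		out, err := exec.Command("kubectl", "--context", "addons-504513",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("gave up waiting for metrics: %v\n%s", err, out)
			return
		}
		fmt.Printf("not ready (%v), retrying in %s\n", err, delay)
		time.Sleep(delay)
		delay *= 2
	}
}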
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-504513
helpers_test.go:235: (dbg) docker inspect addons-504513:

-- stdout --
	[
	    {
	        "Id": "98bc47ee472dc808320d44cc1071573848e28022b6dae187fb3e2cc6aff79901",
	        "Created": "2024-10-07T11:58:04.033530051Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1179822,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T11:58:04.171244449Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/98bc47ee472dc808320d44cc1071573848e28022b6dae187fb3e2cc6aff79901/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/98bc47ee472dc808320d44cc1071573848e28022b6dae187fb3e2cc6aff79901/hostname",
	        "HostsPath": "/var/lib/docker/containers/98bc47ee472dc808320d44cc1071573848e28022b6dae187fb3e2cc6aff79901/hosts",
	        "LogPath": "/var/lib/docker/containers/98bc47ee472dc808320d44cc1071573848e28022b6dae187fb3e2cc6aff79901/98bc47ee472dc808320d44cc1071573848e28022b6dae187fb3e2cc6aff79901-json.log",
	        "Name": "/addons-504513",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-504513:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-504513",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d2e8a2f84ab49a114e991e24dd187b2ac0e96d8fd4ece15acb5092af38d18515-init/diff:/var/lib/docker/overlay2/679cc8fccbb0902884eb141037cc21fc6e7a2efac609a53e07ea6b92675ef1c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2e8a2f84ab49a114e991e24dd187b2ac0e96d8fd4ece15acb5092af38d18515/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2e8a2f84ab49a114e991e24dd187b2ac0e96d8fd4ece15acb5092af38d18515/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2e8a2f84ab49a114e991e24dd187b2ac0e96d8fd4ece15acb5092af38d18515/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-504513",
	                "Source": "/var/lib/docker/volumes/addons-504513/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-504513",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-504513",
	                "name.minikube.sigs.k8s.io": "addons-504513",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2030c59475cbb20250f1152a5ce51d3293664eff342a56f7429e48c868124201",
	            "SandboxKey": "/var/run/docker/netns/2030c59475cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34247"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34248"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34251"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34249"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34250"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-504513": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null,
	                    "NetworkID": "160722c35aa7eda7eed5d217de65189c1b1c5c2374872a33482a67b09fd2b7e1",
	                    "EndpointID": "ffc5a0c61c43377350cf42ab1a3675abf1fdf6ded06c3f8debe26cecdf627b13",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-504513",
	                        "98bc47ee472d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
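The full docker inspect dump above is collected verbatim for the post-mortem. When only a few fields matter (the node container's state, its static IP on the minikube network, or the host port published for 8443/tcp), they can be pulled with docker's --format templates instead of reading the whole JSON. A minimal sketch, assuming only the container name shown in this report; the port template is the same pattern the harness itself logs for 22/tcp further down.

// sketch: extract a few post-mortem fields from the minikube node container
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inspect(format string) string {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", format, "addons-504513").CombinedOutput()
	if err != nil {
		return fmt.Sprintf("error: %v", err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	fmt.Println("state:    ", inspect("{{.State.Status}}"))
	fmt.Println("node IP:  ", inspect(`{{(index .NetworkSettings.Networks "addons-504513").IPAddress}}`))
	fmt.Println("apiserver:", inspect(`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`))
}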
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-504513 -n addons-504513
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-504513 logs -n 25: (1.334420193s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-790369 | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |                     |
	|         | download-docker-790369                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-790369                                                                   | download-docker-790369 | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-325982   | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |                     |
	|         | binary-mirror-325982                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33869                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-325982                                                                     | binary-mirror-325982   | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	| addons  | disable dashboard -p                                                                        | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |                     |
	|         | addons-504513                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |                     |
	|         | addons-504513                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-504513 --wait=true                                                                | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 12:01 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-504513 addons disable                                                                | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:01 UTC | 07 Oct 24 12:01 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-504513 addons disable                                                                | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:09 UTC | 07 Oct 24 12:09 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:09 UTC | 07 Oct 24 12:09 UTC |
	|         | -p addons-504513                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-504513 addons disable                                                                | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:09 UTC | 07 Oct 24 12:10 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-504513 ip                                                                            | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:09 UTC | 07 Oct 24 12:09 UTC |
	| addons  | addons-504513 addons disable                                                                | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:09 UTC | 07 Oct 24 12:09 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-504513 addons                                                                        | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:10 UTC | 07 Oct 24 12:10 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-504513 addons                                                                        | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:10 UTC | 07 Oct 24 12:10 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-504513 addons                                                                        | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:10 UTC | 07 Oct 24 12:10 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-504513 ssh curl -s                                                                   | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-504513 ip                                                                            | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:13 UTC | 07 Oct 24 12:13 UTC |
	| addons  | addons-504513 addons disable                                                                | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:13 UTC | 07 Oct 24 12:13 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-504513 addons disable                                                                | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:13 UTC | 07 Oct 24 12:13 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:13 UTC | 07 Oct 24 12:13 UTC |
	|         | -p addons-504513                                                                            |                        |         |         |                     |                     |
	| addons  | addons-504513 addons disable                                                                | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:13 UTC | 07 Oct 24 12:13 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-504513 ssh cat                                                                       | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:13 UTC | 07 Oct 24 12:13 UTC |
	|         | /opt/local-path-provisioner/pvc-2b3d24e7-13fc-45fb-a4ba-0b05f67be457_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-504513 addons disable                                                                | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:13 UTC | 07 Oct 24 12:14 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-504513 addons                                                                        | addons-504513          | jenkins | v1.34.0 | 07 Oct 24 12:14 UTC | 07 Oct 24 12:14 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:57:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:57:57.259836 1179332 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:57:57.260029 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:57:57.260055 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 11:57:57.260075 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:57:57.260505 1179332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
	I1007 11:57:57.261102 1179332 out.go:352] Setting JSON to false
	I1007 11:57:57.262044 1179332 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27622,"bootTime":1728274656,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 11:57:57.262163 1179332 start.go:139] virtualization:  
	I1007 11:57:57.264826 1179332 out.go:177] * [addons-504513] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 11:57:57.266875 1179332 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 11:57:57.266933 1179332 notify.go:220] Checking for updates...
	I1007 11:57:57.269994 1179332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:57:57.271490 1179332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 11:57:57.273049 1179332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	I1007 11:57:57.274556 1179332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 11:57:57.275949 1179332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:57:57.277874 1179332 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:57:57.297351 1179332 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 11:57:57.297481 1179332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:57:57.351160 1179332 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-07 11:57:57.341916389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:57:57.351283 1179332 docker.go:318] overlay module found
	I1007 11:57:57.353198 1179332 out.go:177] * Using the docker driver based on user configuration
	I1007 11:57:57.354894 1179332 start.go:297] selected driver: docker
	I1007 11:57:57.354913 1179332 start.go:901] validating driver "docker" against <nil>
	I1007 11:57:57.354928 1179332 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:57:57.355592 1179332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:57:57.397830 1179332 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-07 11:57:57.388354841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:57:57.398052 1179332 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:57:57.398268 1179332 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:57:57.400517 1179332 out.go:177] * Using Docker driver with root privileges
	I1007 11:57:57.402404 1179332 cni.go:84] Creating CNI manager for ""
	I1007 11:57:57.402465 1179332 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 11:57:57.402479 1179332 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 11:57:57.402560 1179332 start.go:340] cluster config:
	{Name:addons-504513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-504513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:57:57.405228 1179332 out.go:177] * Starting "addons-504513" primary control-plane node in "addons-504513" cluster
	I1007 11:57:57.408474 1179332 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 11:57:57.410778 1179332 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 11:57:57.413157 1179332 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:57:57.413221 1179332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 11:57:57.413225 1179332 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 11:57:57.413232 1179332 cache.go:56] Caching tarball of preloaded images
	I1007 11:57:57.413315 1179332 preload.go:172] Found /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 11:57:57.413325 1179332 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:57:57.413656 1179332 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/config.json ...
	I1007 11:57:57.413683 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/config.json: {Name:mk638eb9b68aa8610ca27e26c5001fd39eddfc00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:57:57.430772 1179332 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 11:57:57.430794 1179332 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 11:57:57.430810 1179332 cache.go:194] Successfully downloaded all kic artifacts
	I1007 11:57:57.430843 1179332 start.go:360] acquireMachinesLock for addons-504513: {Name:mkbbf38566c8131810ffc8f50dd67d6eb8acc9e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:57:57.431323 1179332 start.go:364] duration metric: took 452.017µs to acquireMachinesLock for "addons-504513"
	I1007 11:57:57.431355 1179332 start.go:93] Provisioning new machine with config: &{Name:addons-504513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-504513 Namespace:default APIServerHAVIP: APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:57:57.431430 1179332 start.go:125] createHost starting for "" (driver="docker")
	I1007 11:57:57.433919 1179332 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1007 11:57:57.434146 1179332 start.go:159] libmachine.API.Create for "addons-504513" (driver="docker")
	I1007 11:57:57.434177 1179332 client.go:168] LocalClient.Create starting
	I1007 11:57:57.434274 1179332 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem
	I1007 11:57:57.826989 1179332 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem
	I1007 11:57:58.382582 1179332 cli_runner.go:164] Run: docker network inspect addons-504513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1007 11:57:58.397388 1179332 cli_runner.go:211] docker network inspect addons-504513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1007 11:57:58.397471 1179332 network_create.go:284] running [docker network inspect addons-504513] to gather additional debugging logs...
	I1007 11:57:58.397492 1179332 cli_runner.go:164] Run: docker network inspect addons-504513
	W1007 11:57:58.410610 1179332 cli_runner.go:211] docker network inspect addons-504513 returned with exit code 1
	I1007 11:57:58.410647 1179332 network_create.go:287] error running [docker network inspect addons-504513]: docker network inspect addons-504513: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-504513 not found
	I1007 11:57:58.410661 1179332 network_create.go:289] output of [docker network inspect addons-504513]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-504513 not found
	
	** /stderr **
	I1007 11:57:58.410771 1179332 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 11:57:58.426157 1179332 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa98f111c271 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:cf:52:8b:17} reservation:<nil>}
	I1007 11:57:58.426539 1179332 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ddde00}
	I1007 11:57:58.426567 1179332 network_create.go:124] attempt to create docker network addons-504513 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1007 11:57:58.426623 1179332 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-504513 addons-504513
	I1007 11:57:58.499525 1179332 network_create.go:108] docker network addons-504513 192.168.58.0/24 created
	I1007 11:57:58.499557 1179332 kic.go:121] calculated static IP "192.168.58.2" for the "addons-504513" container
	I1007 11:57:58.499628 1179332 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1007 11:57:58.514873 1179332 cli_runner.go:164] Run: docker volume create addons-504513 --label name.minikube.sigs.k8s.io=addons-504513 --label created_by.minikube.sigs.k8s.io=true
	I1007 11:57:58.531175 1179332 oci.go:103] Successfully created a docker volume addons-504513
	I1007 11:57:58.531259 1179332 cli_runner.go:164] Run: docker run --rm --name addons-504513-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-504513 --entrypoint /usr/bin/test -v addons-504513:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1007 11:57:59.668982 1179332 cli_runner.go:217] Completed: docker run --rm --name addons-504513-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-504513 --entrypoint /usr/bin/test -v addons-504513:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (1.13765622s)
	I1007 11:57:59.669013 1179332 oci.go:107] Successfully prepared a docker volume addons-504513
	I1007 11:57:59.669042 1179332 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:57:59.669062 1179332 kic.go:194] Starting extracting preloaded images to volume ...
	I1007 11:57:59.669137 1179332 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-504513:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1007 11:58:03.962631 1179332 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-504513:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.293440058s)
	I1007 11:58:03.962663 1179332 kic.go:203] duration metric: took 4.293597727s to extract preloaded images to volume ...
	W1007 11:58:03.962816 1179332 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1007 11:58:03.962933 1179332 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1007 11:58:04.018697 1179332 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-504513 --name addons-504513 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-504513 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-504513 --network addons-504513 --ip 192.168.58.2 --volume addons-504513:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1007 11:58:04.331149 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Running}}
	I1007 11:58:04.362239 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:04.384047 1179332 cli_runner.go:164] Run: docker exec addons-504513 stat /var/lib/dpkg/alternatives/iptables
	I1007 11:58:04.473340 1179332 oci.go:144] the created container "addons-504513" has a running status.
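
Everything after this point talks to the node over SSH on a host port that Docker assigned to the container's 22/tcp; the inspect template below is the same one the provisioner runs a few lines further down:

    $ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-504513
    # Prints the ephemeral host port (34247 in this run) used by all subsequent ssh_runner calls.
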
	I1007 11:58:04.473426 1179332 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa...
	I1007 11:58:05.637051 1179332 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1007 11:58:05.656494 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:05.673271 1179332 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1007 11:58:05.673294 1179332 kic_runner.go:114] Args: [docker exec --privileged addons-504513 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1007 11:58:05.730703 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:05.747058 1179332 machine.go:93] provisionDockerMachine start ...
	I1007 11:58:05.747154 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:05.763125 1179332 main.go:141] libmachine: Using SSH client type: native
	I1007 11:58:05.763407 1179332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34247 <nil> <nil>}
	I1007 11:58:05.763422 1179332 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 11:58:05.895755 1179332 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-504513
	
	I1007 11:58:05.895782 1179332 ubuntu.go:169] provisioning hostname "addons-504513"
	I1007 11:58:05.895858 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:05.912670 1179332 main.go:141] libmachine: Using SSH client type: native
	I1007 11:58:05.912917 1179332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34247 <nil> <nil>}
	I1007 11:58:05.912934 1179332 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-504513 && echo "addons-504513" | sudo tee /etc/hostname
	I1007 11:58:06.061418 1179332 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-504513
	
	I1007 11:58:06.061582 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:06.078853 1179332 main.go:141] libmachine: Using SSH client type: native
	I1007 11:58:06.079121 1179332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34247 <nil> <nil>}
	I1007 11:58:06.079138 1179332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-504513' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-504513/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-504513' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:58:06.212160 1179332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
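
The SSH script above is an idempotent /etc/hosts edit: it only rewrites or appends the 127.0.1.1 entry when the hostname is not already resolvable inside the node. A quick check of the result (docker exec is used here instead of SSH purely for brevity):

    $ docker exec addons-504513 grep addons-504513 /etc/hosts
    # Expect a line such as: 127.0.1.1 addons-504513
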
	I1007 11:58:06.212187 1179332 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19763-1173066/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-1173066/.minikube}
	I1007 11:58:06.212208 1179332 ubuntu.go:177] setting up certificates
	I1007 11:58:06.212218 1179332 provision.go:84] configureAuth start
	I1007 11:58:06.212304 1179332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-504513
	I1007 11:58:06.228721 1179332 provision.go:143] copyHostCerts
	I1007 11:58:06.228808 1179332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem (1078 bytes)
	I1007 11:58:06.228928 1179332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem (1123 bytes)
	I1007 11:58:06.228991 1179332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem (1675 bytes)
	I1007 11:58:06.229045 1179332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem org=jenkins.addons-504513 san=[127.0.0.1 192.168.58.2 addons-504513 localhost minikube]
	I1007 11:58:06.520780 1179332 provision.go:177] copyRemoteCerts
	I1007 11:58:06.520878 1179332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:58:06.520941 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:06.537293 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:06.633227 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 11:58:06.659334 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 11:58:06.683872 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 11:58:06.707099 1179332 provision.go:87] duration metric: took 494.866884ms to configureAuth
	I1007 11:58:06.707126 1179332 ubuntu.go:193] setting minikube options for container-runtime
	I1007 11:58:06.707319 1179332 config.go:182] Loaded profile config "addons-504513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:58:06.707428 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:06.724324 1179332 main.go:141] libmachine: Using SSH client type: native
	I1007 11:58:06.724570 1179332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34247 <nil> <nil>}
	I1007 11:58:06.724596 1179332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:58:06.962634 1179332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:58:06.962662 1179332 machine.go:96] duration metric: took 1.215582058s to provisionDockerMachine
	I1007 11:58:06.962675 1179332 client.go:171] duration metric: took 9.528486941s to LocalClient.Create
	I1007 11:58:06.962688 1179332 start.go:167] duration metric: took 9.528542227s to libmachine.API.Create "addons-504513"
	I1007 11:58:06.962696 1179332 start.go:293] postStartSetup for "addons-504513" (driver="docker")
	I1007 11:58:06.962707 1179332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:58:06.962774 1179332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:58:06.962817 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:06.979075 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:07.077773 1179332 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:58:07.081200 1179332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 11:58:07.081284 1179332 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 11:58:07.081300 1179332 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 11:58:07.081308 1179332 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 11:58:07.081319 1179332 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1173066/.minikube/addons for local assets ...
	I1007 11:58:07.081391 1179332 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1173066/.minikube/files for local assets ...
	I1007 11:58:07.081416 1179332 start.go:296] duration metric: took 118.714604ms for postStartSetup
	I1007 11:58:07.081750 1179332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-504513
	I1007 11:58:07.098388 1179332 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/config.json ...
	I1007 11:58:07.098680 1179332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:58:07.098734 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:07.115160 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:07.204691 1179332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 11:58:07.208888 1179332 start.go:128] duration metric: took 9.777437366s to createHost
	I1007 11:58:07.208915 1179332 start.go:83] releasing machines lock for "addons-504513", held for 9.777576491s
	I1007 11:58:07.209016 1179332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-504513
	I1007 11:58:07.224626 1179332 ssh_runner.go:195] Run: cat /version.json
	I1007 11:58:07.224684 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:07.224746 1179332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:58:07.224824 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:07.242750 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:07.246874 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:07.469577 1179332 ssh_runner.go:195] Run: systemctl --version
	I1007 11:58:07.473829 1179332 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:58:07.614302 1179332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 11:58:07.618575 1179332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:58:07.637809 1179332 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 11:58:07.637884 1179332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:58:07.674599 1179332 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1007 11:58:07.674666 1179332 start.go:495] detecting cgroup driver to use...
	I1007 11:58:07.674714 1179332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 11:58:07.674792 1179332 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:58:07.692306 1179332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:58:07.703769 1179332 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:58:07.703892 1179332 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:58:07.718727 1179332 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:58:07.734353 1179332 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:58:07.816646 1179332 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:58:07.911740 1179332 docker.go:233] disabling docker service ...
	I1007 11:58:07.911813 1179332 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:58:07.933244 1179332 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:58:07.945676 1179332 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:58:08.030678 1179332 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:58:08.126405 1179332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:58:08.139249 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:58:08.157455 1179332 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:58:08.157530 1179332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.167765 1179332 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:58:08.167838 1179332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.177987 1179332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.188064 1179332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.198652 1179332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:58:08.207828 1179332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.217489 1179332 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:58:08.233630 1179332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
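
The sequence of sed calls above edits CRI-O's drop-in in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and writes the crictl endpoint file. One way to review the end state before the crio restart a few lines below, using only the paths shown in those commands:

    $ docker exec addons-504513 cat /etc/crictl.yaml
    $ docker exec addons-504513 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
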
	I1007 11:58:08.243254 1179332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:58:08.251854 1179332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:58:08.260111 1179332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:58:08.344806 1179332 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 11:58:08.458458 1179332 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:58:08.458543 1179332 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:58:08.462407 1179332 start.go:563] Will wait 60s for crictl version
	I1007 11:58:08.462520 1179332 ssh_runner.go:195] Run: which crictl
	I1007 11:58:08.465975 1179332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:58:08.503763 1179332 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 11:58:08.503869 1179332 ssh_runner.go:195] Run: crio --version
	I1007 11:58:08.545387 1179332 ssh_runner.go:195] Run: crio --version
	I1007 11:58:08.590991 1179332 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 11:58:08.593005 1179332 cli_runner.go:164] Run: docker network inspect addons-504513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 11:58:08.609477 1179332 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1007 11:58:08.613043 1179332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:58:08.623688 1179332 kubeadm.go:883] updating cluster {Name:addons-504513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-504513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1007 11:58:08.623806 1179332 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:58:08.623870 1179332 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:58:08.694717 1179332 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:58:08.694744 1179332 crio.go:433] Images already preloaded, skipping extraction
	I1007 11:58:08.694800 1179332 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:58:08.729925 1179332 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:58:08.729951 1179332 cache_images.go:84] Images are preloaded, skipping loading
	I1007 11:58:08.729960 1179332 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.31.1 crio true true} ...
	I1007 11:58:08.730059 1179332 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-504513 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-504513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
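
The unit fragment above is shipped as a systemd drop-in (the 10-kubeadm.conf scp a few lines below); systemctl can show the merged unit to confirm which kubelet flags are actually in effect inside the node:

    $ docker exec addons-504513 systemctl cat kubelet
    # Shows /lib/systemd/system/kubelet.service plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
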
	I1007 11:58:08.730150 1179332 ssh_runner.go:195] Run: crio config
	I1007 11:58:08.778024 1179332 cni.go:84] Creating CNI manager for ""
	I1007 11:58:08.778051 1179332 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 11:58:08.778063 1179332 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:58:08.778107 1179332 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-504513 NodeName:addons-504513 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 11:58:08.778268 1179332 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-504513"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 11:58:08.778336 1179332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:58:08.786816 1179332 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:58:08.786908 1179332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 11:58:08.795644 1179332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1007 11:58:08.814187 1179332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:58:08.832777 1179332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
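
The kubeadm config dumped above is the 2151-byte kubeadm.yaml.new just copied to the node: an InitConfiguration with the node's criSocket and IP, a ClusterConfiguration pointing at control-plane.minikube.internal:8443, and kubelet/kube-proxy sections that relax disk eviction and conntrack tuning for a throwaway CI node. It is later moved to /var/tmp/minikube/kubeadm.yaml and fed to kubeadm init. A hedged sketch for inspecting or validating that copy; the validate subcommand is available in recent kubeadm releases but is not used by this run:

    $ docker exec addons-504513 cat /var/tmp/minikube/kubeadm.yaml
    $ docker exec addons-504513 /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml
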
	I1007 11:58:08.850687 1179332 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1007 11:58:08.854082 1179332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:58:08.864753 1179332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:58:08.946593 1179332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:58:08.960489 1179332 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513 for IP: 192.168.58.2
	I1007 11:58:08.960557 1179332 certs.go:194] generating shared ca certs ...
	I1007 11:58:08.960590 1179332 certs.go:226] acquiring lock for ca certs: {Name:mk2f3e101c3a8a21aa5a00b0d7100cac880b0543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:08.961281 1179332 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key
	I1007 11:58:09.201198 1179332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt ...
	I1007 11:58:09.201235 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt: {Name:mkf68ff1cbb7887c29e41ff1a4dab11b8e1f363e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:09.201435 1179332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key ...
	I1007 11:58:09.201448 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key: {Name:mk6ff50bb1e6fdc479ab8c15639619b2dbd94d5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:09.201542 1179332 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key
	I1007 11:58:09.678906 1179332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.crt ...
	I1007 11:58:09.678941 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.crt: {Name:mkfbafd89c0d50c6f2f3617fd5a4855be4a25abb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:09.679755 1179332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key ...
	I1007 11:58:09.679775 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key: {Name:mk1479ad37fb89b924eaee5a96c9dc3da37f8f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:09.679899 1179332 certs.go:256] generating profile certs ...
	I1007 11:58:09.679967 1179332 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.key
	I1007 11:58:09.679994 1179332 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt with IP's: []
	I1007 11:58:10.029683 1179332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt ...
	I1007 11:58:10.029720 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: {Name:mkd5eba9e658416af57e8eabc03f99ae857d36e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:10.029972 1179332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.key ...
	I1007 11:58:10.029989 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.key: {Name:mk202a91991b7ad436782e803f31a5e28222c04d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:10.030088 1179332 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.key.54c551fb
	I1007 11:58:10.030112 1179332 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.crt.54c551fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1007 11:58:10.150783 1179332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.crt.54c551fb ...
	I1007 11:58:10.150819 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.crt.54c551fb: {Name:mk3413ece515a3e252631a8220d4d1b69f55d166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:10.151019 1179332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.key.54c551fb ...
	I1007 11:58:10.151034 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.key.54c551fb: {Name:mk8cd7f62b87ce698adba5921237b172cd0edb1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:10.151537 1179332 certs.go:381] copying /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.crt.54c551fb -> /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.crt
	I1007 11:58:10.151633 1179332 certs.go:385] copying /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.key.54c551fb -> /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.key
	I1007 11:58:10.151695 1179332 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.key
	I1007 11:58:10.151718 1179332 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.crt with IP's: []
	I1007 11:58:10.430235 1179332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.crt ...
	I1007 11:58:10.430268 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.crt: {Name:mk8f6e148054b88adfad1e5ac523492e177e76ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:10.430459 1179332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.key ...
	I1007 11:58:10.430473 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.key: {Name:mkf3d4920182453ff1b518808d6eded1892e7abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:10.430669 1179332 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 11:58:10.430713 1179332 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem (1078 bytes)
	I1007 11:58:10.430742 1179332 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:58:10.430780 1179332 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem (1675 bytes)
	I1007 11:58:10.431409 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:58:10.457429 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 11:58:10.482050 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:58:10.505960 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 11:58:10.530020 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 11:58:10.558007 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 11:58:10.586586 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:58:10.612192 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 11:58:10.636152 1179332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:58:10.660694 1179332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:58:10.678152 1179332 ssh_runner.go:195] Run: openssl version
	I1007 11:58:10.683548 1179332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:58:10.693067 1179332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:58:10.696527 1179332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:58 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:58:10.696593 1179332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:58:10.703526 1179332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
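
The b5213941.0 symlink name is not arbitrary: it is the OpenSSL subject hash of the minikube CA, which is exactly what the preceding openssl invocation computes. Reproducing it inside the node:

    $ docker exec addons-504513 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941  -> hence the /etc/ssl/certs/b5213941.0 link created above
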
	I1007 11:58:10.712976 1179332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:58:10.716170 1179332 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 11:58:10.716222 1179332 kubeadm.go:392] StartCluster: {Name:addons-504513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-504513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:58:10.716341 1179332 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 11:58:10.716401 1179332 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:58:10.752238 1179332 cri.go:89] found id: ""
	I1007 11:58:10.752337 1179332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 11:58:10.761373 1179332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 11:58:10.770277 1179332 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1007 11:58:10.770366 1179332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 11:58:10.779424 1179332 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 11:58:10.779447 1179332 kubeadm.go:157] found existing configuration files:
	
	I1007 11:58:10.779502 1179332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 11:58:10.788075 1179332 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 11:58:10.788167 1179332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 11:58:10.797375 1179332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 11:58:10.806361 1179332 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 11:58:10.806452 1179332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 11:58:10.815574 1179332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 11:58:10.824601 1179332 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 11:58:10.824696 1179332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 11:58:10.833579 1179332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 11:58:10.842631 1179332 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 11:58:10.842738 1179332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 11:58:10.851412 1179332 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1007 11:58:10.894841 1179332 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 11:58:10.894952 1179332 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 11:58:10.914693 1179332 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1007 11:58:10.914816 1179332 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1007 11:58:10.914878 1179332 kubeadm.go:310] OS: Linux
	I1007 11:58:10.914957 1179332 kubeadm.go:310] CGROUPS_CPU: enabled
	I1007 11:58:10.915033 1179332 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1007 11:58:10.915111 1179332 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1007 11:58:10.915184 1179332 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1007 11:58:10.915264 1179332 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1007 11:58:10.915346 1179332 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1007 11:58:10.915470 1179332 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1007 11:58:10.915564 1179332 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1007 11:58:10.915643 1179332 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1007 11:58:10.974347 1179332 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 11:58:10.974464 1179332 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 11:58:10.974562 1179332 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 11:58:10.984588 1179332 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 11:58:10.989146 1179332 out.go:235]   - Generating certificates and keys ...
	I1007 11:58:10.989244 1179332 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 11:58:10.989316 1179332 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 11:58:11.617333 1179332 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 11:58:12.296673 1179332 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 11:58:12.795585 1179332 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 11:58:13.353531 1179332 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 11:58:13.521063 1179332 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 11:58:13.521298 1179332 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-504513 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1007 11:58:14.162177 1179332 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 11:58:14.162504 1179332 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-504513 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1007 11:58:15.190048 1179332 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 11:58:16.460269 1179332 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 11:58:16.754815 1179332 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 11:58:16.755096 1179332 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 11:58:17.006011 1179332 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 11:58:17.201204 1179332 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 11:58:17.552188 1179332 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 11:58:18.099625 1179332 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 11:58:18.317963 1179332 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 11:58:18.318635 1179332 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 11:58:18.321602 1179332 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 11:58:18.323748 1179332 out.go:235]   - Booting up control plane ...
	I1007 11:58:18.323855 1179332 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 11:58:18.323936 1179332 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 11:58:18.325976 1179332 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 11:58:18.336333 1179332 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 11:58:18.341998 1179332 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 11:58:18.342054 1179332 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 11:58:18.435335 1179332 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 11:58:18.435483 1179332 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 11:58:19.936867 1179332 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501687294s
	I1007 11:58:19.936955 1179332 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 11:58:25.439133 1179332 kubeadm.go:310] [api-check] The API server is healthy after 5.502264659s
	I1007 11:58:25.458306 1179332 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 11:58:25.472488 1179332 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 11:58:25.495394 1179332 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 11:58:25.495589 1179332 kubeadm.go:310] [mark-control-plane] Marking the node addons-504513 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 11:58:25.505568 1179332 kubeadm.go:310] [bootstrap-token] Using token: uqi1ty.cqcawz3fon0l6gz3
	I1007 11:58:25.507364 1179332 out.go:235]   - Configuring RBAC rules ...
	I1007 11:58:25.507504 1179332 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 11:58:25.513092 1179332 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 11:58:25.520552 1179332 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 11:58:25.525754 1179332 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 11:58:25.529302 1179332 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 11:58:25.533941 1179332 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 11:58:25.847753 1179332 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 11:58:26.270242 1179332 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 11:58:26.849575 1179332 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 11:58:26.849609 1179332 kubeadm.go:310] 
	I1007 11:58:26.849734 1179332 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 11:58:26.849743 1179332 kubeadm.go:310] 
	I1007 11:58:26.849828 1179332 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 11:58:26.849835 1179332 kubeadm.go:310] 
	I1007 11:58:26.849860 1179332 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 11:58:26.849942 1179332 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 11:58:26.850010 1179332 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 11:58:26.850021 1179332 kubeadm.go:310] 
	I1007 11:58:26.850099 1179332 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 11:58:26.850109 1179332 kubeadm.go:310] 
	I1007 11:58:26.850167 1179332 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 11:58:26.850174 1179332 kubeadm.go:310] 
	I1007 11:58:26.850226 1179332 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 11:58:26.850300 1179332 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 11:58:26.850368 1179332 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 11:58:26.850372 1179332 kubeadm.go:310] 
	I1007 11:58:26.850455 1179332 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 11:58:26.850531 1179332 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 11:58:26.850536 1179332 kubeadm.go:310] 
	I1007 11:58:26.850619 1179332 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uqi1ty.cqcawz3fon0l6gz3 \
	I1007 11:58:26.850725 1179332 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7db072a6d6df4839e4a7b596f4b08ad30308739d831d243298f5bd971a907272 \
	I1007 11:58:26.850749 1179332 kubeadm.go:310] 	--control-plane 
	I1007 11:58:26.850753 1179332 kubeadm.go:310] 
	I1007 11:58:26.850837 1179332 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 11:58:26.850842 1179332 kubeadm.go:310] 
	I1007 11:58:26.850925 1179332 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uqi1ty.cqcawz3fon0l6gz3 \
	I1007 11:58:26.851027 1179332 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7db072a6d6df4839e4a7b596f4b08ad30308739d831d243298f5bd971a907272 
	I1007 11:58:26.851908 1179332 kubeadm.go:310] W1007 11:58:10.891387    1174 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 11:58:26.852216 1179332 kubeadm.go:310] W1007 11:58:10.892285    1174 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 11:58:26.852440 1179332 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1007 11:58:26.852554 1179332 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
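
The two v1beta3 warnings are benign for this run; kubeadm's own suggested fix is the migrate subcommand it names in the warning. A sketch of that command against the config used here, where the output file name is only an illustrative placeholder:

    $ docker exec addons-504513 /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-v1beta4.yaml
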
	I1007 11:58:26.852581 1179332 cni.go:84] Creating CNI manager for ""
	I1007 11:58:26.852589 1179332 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 11:58:26.855440 1179332 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 11:58:26.857146 1179332 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 11:58:26.861213 1179332 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 11:58:26.861235 1179332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 11:58:26.881581 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
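	[editor note] The CNI step stats /opt/cni/bin/portmap and applies the generated kindnet manifest with the bundled kubectl. A minimal sketch for checking the same state by hand (profile name and binary paths taken from the log; /etc/cni/net.d is the conventional CNI config directory, an assumption here):

	    # CNI plugin binaries and config on the node.
	    minikube ssh -p addons-504513 -- ls /opt/cni/bin /etc/cni/net.d
	    # The kindnet DaemonSet should appear in kube-system once the manifest is applied.
	    kubectl --context addons-504513 -n kube-system get daemonsets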
	I1007 11:58:27.181388 1179332 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 11:58:27.181537 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:27.181618 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-504513 minikube.k8s.io/updated_at=2024_10_07T11_58_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=addons-504513 minikube.k8s.io/primary=true
	I1007 11:58:27.337392 1179332 ops.go:34] apiserver oom_adj: -16
	I1007 11:58:27.337531 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:27.838553 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:28.337723 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:28.838592 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:29.337822 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:29.837702 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:30.338326 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:30.838205 1179332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:58:30.929633 1179332 kubeadm.go:1113] duration metric: took 3.748139924s to wait for elevateKubeSystemPrivileges
	I1007 11:58:30.929672 1179332 kubeadm.go:394] duration metric: took 20.213455358s to StartCluster
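	[editor note] elevateKubeSystemPrivileges above polls "kubectl get sa default" until the default service account exists; the actual privilege grant is the minikube-rbac cluster-admin binding created at 11:58:27. A hedged way to confirm the binding took effect:

	    # Inspect the binding minikube created, then check effective permissions.
	    kubectl --context addons-504513 get clusterrolebinding minikube-rbac -o yaml
	    kubectl --context addons-504513 auth can-i '*' '*' --as=system:serviceaccount:kube-system:default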
	I1007 11:58:30.929689 1179332 settings.go:142] acquiring lock: {Name:mk942b9f169f258985b7aaeeac5d38deaf461542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:30.929807 1179332 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 11:58:30.930181 1179332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/kubeconfig: {Name:mkfc1e9493ee5c91f2837c31acce39f4935ee46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:58:30.930772 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 11:58:30.930787 1179332 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:58:30.931071 1179332 config.go:182] Loaded profile config "addons-504513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:58:30.931118 1179332 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
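	[editor note] The toEnable map above is what minikube derived from defaults plus start flags; each entry set to true turns into one of the "Setting addon ...=true" lines that follow. The same per-profile state can be listed or changed after start, as sketched here:

	    # Addon status for this profile.
	    minikube addons list -p addons-504513
	    # Toggle individual addons without restarting the cluster.
	    minikube addons enable metrics-server -p addons-504513
	    minikube addons disable volcano -p addons-504513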
	I1007 11:58:30.931234 1179332 addons.go:69] Setting yakd=true in profile "addons-504513"
	I1007 11:58:30.931250 1179332 addons.go:234] Setting addon yakd=true in "addons-504513"
	I1007 11:58:30.931290 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.931843 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.932489 1179332 addons.go:69] Setting cloud-spanner=true in profile "addons-504513"
	I1007 11:58:30.932514 1179332 addons.go:234] Setting addon cloud-spanner=true in "addons-504513"
	I1007 11:58:30.932548 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.933054 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.935907 1179332 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-504513"
	I1007 11:58:30.936054 1179332 out.go:177] * Verifying Kubernetes components...
	I1007 11:58:30.936142 1179332 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-504513"
	I1007 11:58:30.936350 1179332 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-504513"
	I1007 11:58:30.936365 1179332 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-504513"
	I1007 11:58:30.936385 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.936838 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.936192 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.938054 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.940561 1179332 addons.go:69] Setting default-storageclass=true in profile "addons-504513"
	I1007 11:58:30.940586 1179332 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-504513"
	I1007 11:58:30.940865 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.941170 1179332 addons.go:69] Setting registry=true in profile "addons-504513"
	I1007 11:58:30.941206 1179332 addons.go:234] Setting addon registry=true in "addons-504513"
	I1007 11:58:30.941270 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.941794 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.952371 1179332 addons.go:69] Setting gcp-auth=true in profile "addons-504513"
	I1007 11:58:30.958730 1179332 mustload.go:65] Loading cluster: addons-504513
	I1007 11:58:30.958994 1179332 config.go:182] Loaded profile config "addons-504513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:58:30.959313 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.960325 1179332 addons.go:69] Setting ingress=true in profile "addons-504513"
	I1007 11:58:30.960375 1179332 addons.go:234] Setting addon ingress=true in "addons-504513"
	I1007 11:58:30.960431 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.961019 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.952452 1179332 addons.go:69] Setting storage-provisioner=true in profile "addons-504513"
	I1007 11:58:30.992491 1179332 addons.go:234] Setting addon storage-provisioner=true in "addons-504513"
	I1007 11:58:30.992559 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.993078 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.993272 1179332 addons.go:69] Setting ingress-dns=true in profile "addons-504513"
	I1007 11:58:30.993309 1179332 addons.go:234] Setting addon ingress-dns=true in "addons-504513"
	I1007 11:58:30.993365 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:30.993837 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.952462 1179332 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-504513"
	I1007 11:58:31.011896 1179332 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-504513"
	I1007 11:58:31.012302 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:31.015252 1179332 addons.go:69] Setting inspektor-gadget=true in profile "addons-504513"
	I1007 11:58:31.015344 1179332 addons.go:234] Setting addon inspektor-gadget=true in "addons-504513"
	I1007 11:58:31.015414 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:31.015936 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.952471 1179332 addons.go:69] Setting volcano=true in profile "addons-504513"
	I1007 11:58:31.025798 1179332 addons.go:234] Setting addon volcano=true in "addons-504513"
	I1007 11:58:31.025873 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:31.026488 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:31.050036 1179332 addons.go:69] Setting metrics-server=true in profile "addons-504513"
	I1007 11:58:31.050110 1179332 addons.go:234] Setting addon metrics-server=true in "addons-504513"
	I1007 11:58:31.050178 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:31.050662 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.952477 1179332 addons.go:69] Setting volumesnapshots=true in profile "addons-504513"
	I1007 11:58:31.050959 1179332 addons.go:234] Setting addon volumesnapshots=true in "addons-504513"
	I1007 11:58:31.051005 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:31.052022 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:30.952590 1179332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:58:31.119348 1179332 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 11:58:31.121605 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 11:58:31.121721 1179332 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 11:58:31.123791 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 11:58:31.124038 1179332 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 11:58:31.124052 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 11:58:31.124125 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.129976 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 11:58:31.132493 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1007 11:58:31.134665 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 11:58:31.137228 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 11:58:31.139194 1179332 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 11:58:31.139296 1179332 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 11:58:31.139385 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.142375 1179332 addons.go:234] Setting addon default-storageclass=true in "addons-504513"
	I1007 11:58:31.142416 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:31.142811 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:31.164531 1179332 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 11:58:31.166935 1179332 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 11:58:31.166959 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 11:58:31.167024 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.189500 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 11:58:31.191821 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 11:58:31.193604 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 11:58:31.193644 1179332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 11:58:31.193710 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.230841 1179332 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:58:31.231107 1179332 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 11:58:31.248588 1179332 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 11:58:31.248771 1179332 host.go:66] Checking if "addons-504513" exists ...
	W1007 11:58:31.254306 1179332 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
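	[editor note] The volcano failure here is expected on this job: the addon rejects the cri-o runtime. If volcano were actually needed, a hedged workaround would be a separate profile on a runtime the addon supports (profile name below is hypothetical, not part of this run):

	    # Hypothetical alternative profile on containerd with volcano enabled at start.
	    minikube start -p volcano-test --container-runtime=containerd --addons=volcano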
	I1007 11:58:31.254463 1179332 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 11:58:31.254630 1179332 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:58:31.254662 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 11:58:31.254766 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.256221 1179332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 11:58:31.256241 1179332 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 11:58:31.256357 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.273582 1179332 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 11:58:31.273871 1179332 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 11:58:31.276132 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 11:58:31.276154 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 11:58:31.276218 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.276555 1179332 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 11:58:31.276608 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 11:58:31.276689 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.290640 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 11:58:31.295851 1179332 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:58:31.308046 1179332 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 11:58:31.308067 1179332 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 11:58:31.308130 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.296871 1179332 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-504513"
	I1007 11:58:31.308394 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:31.308815 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:31.319771 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.324314 1179332 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 11:58:31.326068 1179332 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 11:58:31.326089 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 11:58:31.326162 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.339639 1179332 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 11:58:31.344117 1179332 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 11:58:31.344154 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 11:58:31.344221 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.360374 1179332 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 11:58:31.367567 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.368396 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.369160 1179332 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 11:58:31.369174 1179332 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 11:58:31.369242 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.390740 1179332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:58:31.402771 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.404526 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.430160 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.434057 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.446760 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.476098 1179332 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 11:58:31.480363 1179332 out.go:177]   - Using image docker.io/busybox:stable
	I1007 11:58:31.482347 1179332 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 11:58:31.482370 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 11:58:31.482436 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:31.492433 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.506821 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.508579 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	W1007 11:58:31.510433 1179332 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1007 11:58:31.510463 1179332 retry.go:31] will retry after 170.543342ms: ssh: handshake failed: EOF
	I1007 11:58:31.525998 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.543240 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:31.742399 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 11:58:31.811567 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 11:58:31.832357 1179332 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 11:58:31.832382 1179332 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 11:58:31.876508 1179332 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 11:58:31.876540 1179332 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 11:58:31.880167 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 11:58:31.891634 1179332 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 11:58:31.891662 1179332 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 11:58:31.926486 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 11:58:31.926513 1179332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 11:58:31.935767 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 11:58:31.952498 1179332 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 11:58:31.952523 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 11:58:31.955318 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 11:58:31.975007 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 11:58:31.975032 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 11:58:31.999307 1179332 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 11:58:31.999335 1179332 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 11:58:32.002701 1179332 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 11:58:32.002731 1179332 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 11:58:32.005384 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:58:32.080507 1179332 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 11:58:32.080531 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 11:58:32.106726 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 11:58:32.106753 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 11:58:32.111469 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 11:58:32.111496 1179332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 11:58:32.118064 1179332 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 11:58:32.118090 1179332 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 11:58:32.134210 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 11:58:32.142893 1179332 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 11:58:32.142918 1179332 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 11:58:32.156865 1179332 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 11:58:32.156892 1179332 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 11:58:32.209179 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 11:58:32.267235 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 11:58:32.267262 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 11:58:32.273457 1179332 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 11:58:32.273482 1179332 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 11:58:32.276669 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 11:58:32.276694 1179332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 11:58:32.291035 1179332 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 11:58:32.291058 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 11:58:32.379596 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 11:58:32.379623 1179332 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 11:58:32.413887 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 11:58:32.435105 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 11:58:32.454246 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 11:58:32.454275 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 11:58:32.485200 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 11:58:32.485226 1179332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 11:58:32.557627 1179332 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:58:32.557649 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 11:58:32.611259 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 11:58:32.611297 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 11:58:32.663666 1179332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 11:58:32.663692 1179332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 11:58:32.721722 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:58:32.783073 1179332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 11:58:32.783099 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 11:58:32.794202 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 11:58:32.794234 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 11:58:32.865301 1179332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 11:58:32.865326 1179332 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 11:58:32.883450 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 11:58:32.883476 1179332 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 11:58:32.948003 1179332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 11:58:32.948030 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 11:58:32.953231 1179332 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 11:58:32.953304 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 11:58:33.047666 1179332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 11:58:33.047753 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 11:58:33.068957 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 11:58:33.142163 1179332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 11:58:33.142236 1179332 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 11:58:33.268917 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 11:58:33.314257 1179332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.023582098s)
	I1007 11:58:33.314337 1179332 start.go:971] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
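	[editor note] The ~2s command above pipes the coredns ConfigMap through sed to insert a hosts block mapping host.minikube.internal to 192.168.58.1, then replaces the ConfigMap. A quick hedged check that the record is actually served (the busybox image tag is an assumption):

	    # Confirm the hosts block landed in the Corefile.
	    kubectl --context addons-504513 -n kube-system get configmap coredns -o yaml
	    # Resolve the injected name from inside the cluster.
	    kubectl --context addons-504513 run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- nslookup host.minikube.internal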
	I1007 11:58:33.315466 1179332 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.924701892s)
	I1007 11:58:33.316597 1179332 node_ready.go:35] waiting up to 6m0s for node "addons-504513" to be "Ready" ...
	I1007 11:58:34.081665 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.33922698s)
	I1007 11:58:34.400788 1179332 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-504513" context rescaled to 1 replicas
	I1007 11:58:35.555516 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
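	[editor note] node_ready.go keeps polling the node object until its Ready condition flips to True, for up to the 6m0s budget noted above. An equivalent hedged one-liner outside the test harness:

	    # Block until the node reports Ready, mirroring the wait in the log.
	    kubectl --context addons-504513 wait --for=condition=Ready node/addons-504513 --timeout=6m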
	I1007 11:58:35.592566 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.78096353s)
	I1007 11:58:35.887847 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.00764492s)
	I1007 11:58:36.326812 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.39100974s)
	I1007 11:58:36.327085 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.371742002s)
	I1007 11:58:36.355678 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.350256217s)
	W1007 11:58:36.441851 1179332 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
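	[editor note] The default-storageclass warning above is an optimistic-concurrency conflict: minikube tried to demote the rancher local-path class while another writer was updating it, so making "standard" the default failed on this attempt. A hedged manual fix using the standard default-class annotation (class names taken from this log):

	    # See which classes currently claim to be default.
	    kubectl --context addons-504513 get storageclass
	    # Demote local-path and make standard the sole default.
	    kubectl --context addons-504513 patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	    kubectl --context addons-504513 patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'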
	I1007 11:58:37.293472 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.15922741s)
	I1007 11:58:37.293663 1179332 addons.go:475] Verifying addon ingress=true in "addons-504513"
	I1007 11:58:37.293776 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.858647928s)
	I1007 11:58:37.293824 1179332 addons.go:475] Verifying addon metrics-server=true in "addons-504513"
	I1007 11:58:37.293544 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.879627558s)
	I1007 11:58:37.293499 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.084205847s)
	I1007 11:58:37.294259 1179332 addons.go:475] Verifying addon registry=true in "addons-504513"
	I1007 11:58:37.294630 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.572870949s)
	W1007 11:58:37.295655 1179332 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 11:58:37.295681 1179332 retry.go:31] will retry after 297.35536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
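	[editor note] The failure above is an apply-ordering race: the VolumeSnapshotClass object is submitted in the same apply batch as the CRDs that define it, and the CRD is not yet established when the class is validated, hence "ensure CRDs are installed first". minikube simply retries after ~300ms; a hedged manual sequence would gate the second apply on CRD establishment, run on the node with the paths from the log:

	    # Run inside the node (e.g. via `minikube ssh -p addons-504513`).
	    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml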
	I1007 11:58:37.294717 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.225655053s)
	I1007 11:58:37.296285 1179332 out.go:177] * Verifying ingress addon...
	I1007 11:58:37.297738 1179332 out.go:177] * Verifying registry addon...
	I1007 11:58:37.297806 1179332 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-504513 service yakd-dashboard -n yakd-dashboard
	
	I1007 11:58:37.300221 1179332 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 11:58:37.302832 1179332 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 11:58:37.314673 1179332 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 11:58:37.314763 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:37.333394 1179332 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 11:58:37.333415 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:37.587779 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.318743695s)
	I1007 11:58:37.587866 1179332 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-504513"
	I1007 11:58:37.589694 1179332 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 11:58:37.592384 1179332 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 11:58:37.593422 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:58:37.611585 1179332 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 11:58:37.611608 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
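	[editor note] kapi.go polls pods matching the minikube addon label until they leave Pending. The same check by hand, with the label selector taken verbatim from the log line above:

	    kubectl --context addons-504513 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver -o wide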
	I1007 11:58:37.811161 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:37.812196 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:37.821342 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:38.097315 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:38.306192 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:38.306472 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:38.597016 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:38.806112 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:38.807082 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:39.096603 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:39.305595 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:39.307371 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:39.596798 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:39.805719 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:39.807022 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:39.821505 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:40.122417 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:40.305597 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:40.310469 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:40.572647 1179332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.979185223s)
	I1007 11:58:40.597230 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:40.806848 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:40.809193 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:41.097097 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:41.157039 1179332 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 11:58:41.157169 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:41.180600 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:41.306019 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:41.307330 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:41.311356 1179332 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 11:58:41.366331 1179332 addons.go:234] Setting addon gcp-auth=true in "addons-504513"
	I1007 11:58:41.366408 1179332 host.go:66] Checking if "addons-504513" exists ...
	I1007 11:58:41.366914 1179332 cli_runner.go:164] Run: docker container inspect addons-504513 --format={{.State.Status}}
	I1007 11:58:41.400961 1179332 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 11:58:41.401017 1179332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504513
	I1007 11:58:41.422326 1179332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/addons-504513/id_rsa Username:docker}
	I1007 11:58:41.520733 1179332 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 11:58:41.522694 1179332 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:58:41.524833 1179332 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 11:58:41.524856 1179332 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 11:58:41.568803 1179332 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 11:58:41.568831 1179332 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 11:58:41.588730 1179332 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 11:58:41.588755 1179332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 11:58:41.596484 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:41.612136 1179332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 11:58:41.809057 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:41.809755 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:41.835134 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:42.098130 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:42.330219 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:42.331648 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:42.345920 1179332 addons.go:475] Verifying addon gcp-auth=true in "addons-504513"
	I1007 11:58:42.347766 1179332 out.go:177] * Verifying gcp-auth addon...
	I1007 11:58:42.350377 1179332 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 11:58:42.364423 1179332 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 11:58:42.364504 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
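	[editor note] The gcp-auth addon deploys a mutating admission webhook that injects Google application credentials into workload pods, so both the webhook registration and the gcp-auth pod need to be healthy. Hedged checks (label selector from the log line above):

	    # Webhook registration and pod state for the gcp-auth addon.
	    kubectl --context addons-504513 get mutatingwebhookconfigurations
	    kubectl --context addons-504513 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth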
	I1007 11:58:42.597297 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:42.807656 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:42.808496 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:42.854519 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:43.096353 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:43.304237 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:43.306394 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:43.355366 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:43.597490 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:43.804704 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:43.809388 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:43.854264 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:44.096508 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:44.305012 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:44.306679 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:44.320528 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:44.354658 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:44.595646 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:44.805771 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:44.807217 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:44.853535 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:45.096789 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:45.307975 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:45.308779 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:45.354438 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:45.595963 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:45.805020 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:45.807439 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:45.853644 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:46.095782 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:46.304570 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:46.305999 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:46.354055 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:46.596169 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:46.806446 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:46.807314 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:46.820576 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:46.854222 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:47.096143 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:47.304203 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:47.306596 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:47.353577 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:47.596984 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:47.805258 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:47.806744 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:47.854092 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:48.096297 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:48.304315 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:48.305969 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:48.354665 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:48.595903 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:48.804081 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:48.805813 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:48.854202 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:49.096345 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:49.304466 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:49.305875 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:49.319524 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:49.354050 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:49.596769 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:49.803947 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:49.806402 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:49.854077 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:50.096783 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:50.304757 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:50.307088 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:50.353772 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:50.595883 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:50.803996 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:50.806217 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:50.854045 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:51.096322 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:51.304558 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:51.307071 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:51.319909 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:51.353504 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:51.596735 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:51.806034 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:51.807394 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:51.854213 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:52.096363 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:52.304779 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:52.306125 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:52.353761 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:52.596319 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:52.803954 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:52.806396 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:52.854322 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:53.096441 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:53.304439 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:53.305799 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:53.320768 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:53.353895 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:53.596301 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:53.804558 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:53.805987 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:53.853831 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:54.096311 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:54.304574 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:54.305882 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:54.367534 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:54.596759 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:54.804702 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:54.806242 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:54.853734 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:55.096873 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:55.305409 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:55.306999 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:55.354523 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:55.596420 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:55.804955 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:55.806456 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:55.820388 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:55.854242 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:56.095732 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:56.306070 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:56.306424 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:56.354044 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:56.596378 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:56.804439 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:56.807929 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:56.854289 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:57.096042 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:57.304586 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:57.307116 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:57.354087 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:57.596479 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:57.804158 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:57.806665 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:57.853741 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:58.096416 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:58.304724 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:58.306481 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:58.320645 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:58:58.353697 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:58.596068 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:58.805098 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:58.806458 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:58.853462 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:59.096436 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:59.304476 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:59.305585 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:59.353851 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:58:59.596424 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:58:59.804432 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:58:59.806866 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:58:59.853540 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:00.104113 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:00.325407 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:00.329695 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:00.336672 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:00.363197 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:00.596428 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:00.805859 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:00.807279 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:00.853627 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:01.095787 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:01.304027 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:01.305580 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:01.353954 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:01.596674 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:01.805162 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:01.806067 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:01.854074 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:02.096626 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:02.304926 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:02.307258 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:02.354205 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:02.595689 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:02.804738 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:02.806137 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:02.820077 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:02.854135 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:03.097340 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:03.304783 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:03.306225 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:03.354955 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:03.595578 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:03.804720 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:03.806580 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:03.854018 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:04.096915 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:04.304356 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:04.306567 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:04.353944 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:04.596360 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:04.805577 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:04.807641 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:04.820447 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:04.853509 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:05.096411 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:05.306040 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:05.307413 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:05.354356 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:05.595625 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:05.806757 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:05.806799 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:05.854385 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:06.096585 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:06.304358 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:06.305676 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:06.353570 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:06.595479 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:06.805111 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:06.806566 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:06.820564 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:06.853542 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:07.096123 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:07.304537 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:07.306952 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:07.354309 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:07.595755 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:07.804767 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:07.806297 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:07.854314 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:08.096620 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:08.305270 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:08.306440 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:08.354597 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:08.595627 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:08.807184 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:08.808269 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:08.853783 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:09.096587 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:09.304659 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:09.306196 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:09.320190 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:09.354277 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:09.596560 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:09.805116 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:09.807312 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:09.853886 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:10.096981 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:10.305994 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:10.307383 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:10.353964 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:10.596500 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:10.805764 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:10.806525 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:10.854364 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:11.096060 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:11.305506 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:11.306916 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:11.354698 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:11.595692 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:11.804946 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:11.806602 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:11.820464 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:11.853692 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:12.096513 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:12.304857 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:12.306265 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:12.353832 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:12.595908 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:12.805006 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:12.806411 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:12.854146 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:13.096225 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:13.304058 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:13.306811 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:13.354295 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:13.596547 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:13.805255 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:13.806741 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:13.820545 1179332 node_ready.go:53] node "addons-504513" has status "Ready":"False"
	I1007 11:59:13.853775 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:14.096408 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:14.304227 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:14.306549 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:14.354522 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:14.596714 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:14.804279 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:14.805795 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:14.853881 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:15.096497 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:15.304235 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:15.305768 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:15.354481 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:15.596582 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:15.805639 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:15.806857 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:15.853212 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:16.109060 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:16.329660 1179332 node_ready.go:49] node "addons-504513" has status "Ready":"True"
	I1007 11:59:16.329687 1179332 node_ready.go:38] duration metric: took 43.013030999s for node "addons-504513" to be "Ready" ...
	I1007 11:59:16.329699 1179332 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:59:16.368573 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:16.381154 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:16.383174 1179332 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 11:59:16.383203 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:16.430331 1179332 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g27sx" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:16.608659 1179332 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 11:59:16.608686 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:16.828405 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:16.829336 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:16.905802 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:17.101498 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:17.307963 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:17.310654 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:17.405036 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:17.597955 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:17.808972 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:17.810771 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:17.909811 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:17.937175 1179332 pod_ready.go:93] pod "coredns-7c65d6cfc9-g27sx" in "kube-system" namespace has status "Ready":"True"
	I1007 11:59:17.937252 1179332 pod_ready.go:82] duration metric: took 1.506886583s for pod "coredns-7c65d6cfc9-g27sx" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.937341 1179332 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.944265 1179332 pod_ready.go:93] pod "etcd-addons-504513" in "kube-system" namespace has status "Ready":"True"
	I1007 11:59:17.944338 1179332 pod_ready.go:82] duration metric: took 6.964823ms for pod "etcd-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.944370 1179332 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.953927 1179332 pod_ready.go:93] pod "kube-apiserver-addons-504513" in "kube-system" namespace has status "Ready":"True"
	I1007 11:59:17.954008 1179332 pod_ready.go:82] duration metric: took 9.615342ms for pod "kube-apiserver-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.954039 1179332 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.963066 1179332 pod_ready.go:93] pod "kube-controller-manager-addons-504513" in "kube-system" namespace has status "Ready":"True"
	I1007 11:59:17.963145 1179332 pod_ready.go:82] duration metric: took 9.083513ms for pod "kube-controller-manager-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.963187 1179332 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j4dwf" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.972188 1179332 pod_ready.go:93] pod "kube-proxy-j4dwf" in "kube-system" namespace has status "Ready":"True"
	I1007 11:59:17.972292 1179332 pod_ready.go:82] duration metric: took 9.066414ms for pod "kube-proxy-j4dwf" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:17.972322 1179332 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:18.100803 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:18.305138 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:18.306945 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:18.334025 1179332 pod_ready.go:93] pod "kube-scheduler-addons-504513" in "kube-system" namespace has status "Ready":"True"
	I1007 11:59:18.334094 1179332 pod_ready.go:82] duration metric: took 361.719532ms for pod "kube-scheduler-addons-504513" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:18.334124 1179332 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace to be "Ready" ...
	I1007 11:59:18.353955 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:18.601572 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:18.805701 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:18.807732 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:18.854024 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:19.098453 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:19.306975 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:19.307829 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:19.405855 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:19.598060 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:19.806097 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:19.806773 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:19.853892 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:20.097683 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:20.305307 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:20.307745 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:20.340388 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:20.353482 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:20.597548 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:20.805503 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:20.806616 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:20.854067 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:21.098112 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:21.305280 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:21.308782 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:21.353403 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:21.597886 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:21.805972 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:21.807858 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:21.854404 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:22.097626 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:22.306599 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:22.307677 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:22.341091 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:22.353793 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:22.597475 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:22.806207 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:22.807180 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:22.854438 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:23.098366 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:23.305524 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:23.308615 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:23.354280 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:23.597255 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:23.805798 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:23.808173 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:23.874900 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:24.099403 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:24.307628 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:24.325939 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:24.349546 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:24.354353 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:24.596872 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:24.806458 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:24.809789 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:24.856398 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:25.098007 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:25.333845 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:25.338244 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:25.367514 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:25.597308 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:25.806666 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:25.810582 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:25.856465 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:26.097976 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:26.306137 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:26.309228 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:26.354908 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:26.596482 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:26.807728 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:26.808653 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:26.841548 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:26.906082 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:27.099648 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:27.304729 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:27.307767 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:27.353481 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:27.597464 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:27.807291 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:27.807671 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:27.853774 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:28.097330 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:28.305866 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:28.307882 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:28.354050 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:28.598095 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:28.805904 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:28.808719 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:28.854376 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:29.097804 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:29.305332 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:29.307990 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:29.341233 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:29.404717 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:29.597324 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:29.807962 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:29.908429 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:29.908761 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:30.098286 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:30.306498 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:30.307873 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:30.354621 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:30.597589 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:30.805090 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:30.807070 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:30.853911 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:31.101339 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:31.306124 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:31.309521 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:31.355514 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:31.598380 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:31.804969 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:31.807526 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:31.844061 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:31.857648 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:32.097825 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:32.305500 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:32.308813 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:32.358530 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:32.597693 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:32.815480 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:32.818006 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:32.853624 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:33.097734 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:33.306620 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:33.307350 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:33.353970 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:33.599093 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:33.808834 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:33.809351 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:33.853742 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:34.098729 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:34.307614 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:34.310776 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:34.343763 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:34.354255 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:34.598521 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:34.806015 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:34.808625 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:34.854672 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:35.105176 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:35.313916 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:35.315378 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:35.412687 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:35.597882 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:35.806576 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:35.806784 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:35.854913 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:36.097213 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:36.304891 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:36.306952 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:36.344390 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:36.354517 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:36.598191 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:36.806798 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:36.807799 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:36.853873 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:37.097343 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:37.305533 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:37.308315 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:37.354539 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:37.598610 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:37.819078 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:37.830116 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:37.911117 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:38.098195 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:38.309065 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:38.310300 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:38.354316 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:38.598877 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:38.804775 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:38.806605 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:38.844506 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:38.853982 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:39.103636 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:39.308732 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:39.310253 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:39.355960 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:39.601829 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:39.807614 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:39.813126 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:39.858263 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:40.099145 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:40.306285 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:40.308632 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:40.354197 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:40.597903 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:40.807440 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:40.809176 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:40.854620 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:41.097510 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:41.313520 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:41.314842 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:41.361953 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:41.377518 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:41.598129 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:41.843526 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:41.847466 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:41.855618 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:42.105720 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:42.305994 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:42.309609 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:42.354653 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:42.598666 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:42.808504 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:42.810116 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:42.854370 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:43.097994 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:43.307105 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:43.310168 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:43.354962 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:43.608692 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:43.806712 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:43.809356 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:43.845893 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:43.853868 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:44.097786 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:44.305991 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:44.306913 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:44.354060 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:44.599509 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:44.807114 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:44.807416 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:44.854364 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:45.104501 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:45.309360 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:45.314305 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:45.355099 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:45.598406 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:45.806497 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:45.809216 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:45.855712 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:46.098288 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:46.305212 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:46.306616 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:46.339983 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:46.354806 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:46.597515 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:46.811396 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:46.813450 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:46.854369 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:47.098447 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:47.311173 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:47.311242 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:47.355599 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:47.605734 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:47.808708 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:47.810086 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:47.858088 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:48.102608 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:48.306261 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:48.311629 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:48.348948 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:48.360080 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:48.600558 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:48.807729 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:48.809783 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:48.854902 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:49.100040 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:49.306838 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:49.308893 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:49.354246 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:49.597221 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:49.805247 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:49.807724 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:49.854649 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:50.098770 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:50.305655 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:50.308697 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:50.354114 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:50.598787 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:50.807577 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:50.810135 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:50.841013 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:50.905774 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:51.097337 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:51.305316 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:51.306798 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:51.354140 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:51.597517 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:51.805341 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:51.807012 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:51.853701 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:52.096993 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:52.305259 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:52.307220 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:52.353971 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:52.597658 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:52.806983 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:52.807599 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:52.844702 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:52.854151 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:53.097563 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:53.305455 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:53.309355 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:53.354051 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:53.600008 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:53.805785 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:53.807015 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:53.854111 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:54.097249 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:54.304510 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:54.306414 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:54.357063 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:54.602760 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:54.806660 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:54.807773 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:59:54.853840 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:55.097675 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:55.304890 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:55.306999 1179332 kapi.go:107] duration metric: took 1m18.004166252s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 11:59:55.340917 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:55.354017 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:55.598215 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:55.804986 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:55.854244 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:56.098431 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:56.305427 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:56.354159 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:56.600767 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:56.811443 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:56.862427 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:57.102668 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:57.305837 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:57.345611 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:57.355390 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:57.598734 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:57.805156 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:57.854796 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:58.099847 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:58.305517 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:58.359560 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:58.598857 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:58.805468 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:58.854475 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:59.098206 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:59.307199 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:59.354076 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:59:59.598893 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:59:59.831817 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:59:59.851552 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 11:59:59.923200 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:00.105985 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:00.428918 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:00.430459 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:00.721068 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:00.817839 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:00.881310 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:01.134010 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:01.313756 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:01.355594 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:01.600772 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:01.806813 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:01.855897 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:02.104648 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:02.305878 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:02.345882 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:02.355700 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:02.600447 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:02.805401 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:02.854250 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:03.099516 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:03.305221 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:03.354854 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:03.600596 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:03.808322 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:03.858236 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:04.106491 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:04.308134 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:04.355414 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:04.598817 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:04.804849 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:04.841234 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:04.854179 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:05.098373 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:05.305914 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:05.356235 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:05.597320 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:05.804883 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:05.853549 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:06.098731 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:06.305164 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:06.355088 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:06.597237 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:06.806198 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:06.854120 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:07.098511 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:07.308850 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:07.341090 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:07.407654 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:07.597521 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:07.805423 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:07.854933 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:08.097079 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:08.305158 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:08.354247 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:08.598106 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:08.806207 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:08.854306 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:09.102629 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:09.305638 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:09.341234 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:09.354606 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:09.597963 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:09.806211 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:09.853961 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:10.099590 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:10.305218 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:10.355568 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:10.597674 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:10.806006 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:10.854483 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:11.098648 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:11.306155 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:11.341367 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:11.354951 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:11.601326 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:11.806265 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:11.855036 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:12.099419 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:12.306952 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:12.354951 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:12.597617 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:12.805504 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:12.854270 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:13.097658 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:13.305465 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:13.341601 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:13.358427 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:13.597546 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:13.807591 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:13.854606 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:14.098580 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:14.306620 1179332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:00:14.354200 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:14.605246 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:14.804372 1179332 kapi.go:107] duration metric: took 1m37.504149213s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 12:00:14.854382 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:15.097402 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:15.349074 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:15.355551 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:15.597607 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:15.854124 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:16.099037 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:16.355894 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:16.599709 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:16.855501 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:17.099114 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:17.355680 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:00:17.598191 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:17.840929 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:17.854801 1179332 kapi.go:107] duration metric: took 1m35.504424875s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 12:00:17.857052 1179332 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-504513 cluster.
	I1007 12:00:17.858947 1179332 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 12:00:17.860660 1179332 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1007 12:00:18.097169 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:18.597821 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:19.099496 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:19.603614 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:20.098146 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:20.339647 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:20.598251 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:21.098393 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:21.598548 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:22.102335 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:22.340772 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:22.597775 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:23.098251 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:23.597657 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:24.097449 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:24.597023 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:24.840301 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:25.098058 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:25.598421 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:26.099583 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:26.598798 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:26.840595 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:27.098901 1179332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:00:27.598120 1179332 kapi.go:107] duration metric: took 1m50.005731237s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 12:00:27.600300 1179332 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1007 12:00:27.602352 1179332 addons.go:510] duration metric: took 1m56.67122167s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1007 12:00:29.341417 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:31.841016 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:34.339904 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:36.340946 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:38.839846 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:40.840732 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:42.841119 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:44.841498 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:47.340724 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:49.341077 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:51.840437 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:53.840817 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:56.340698 1179332 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"False"
	I1007 12:00:56.840837 1179332 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace has status "Ready":"True"
	I1007 12:00:56.840864 1179332 pod_ready.go:82] duration metric: took 1m38.50671883s for pod "metrics-server-84c5f94fbc-zzgph" in "kube-system" namespace to be "Ready" ...
	I1007 12:00:56.840876 1179332 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zfrr9" in "kube-system" namespace to be "Ready" ...
	I1007 12:00:56.846132 1179332 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-zfrr9" in "kube-system" namespace has status "Ready":"True"
	I1007 12:00:56.846155 1179332 pod_ready.go:82] duration metric: took 5.270992ms for pod "nvidia-device-plugin-daemonset-zfrr9" in "kube-system" namespace to be "Ready" ...
	I1007 12:00:56.846177 1179332 pod_ready.go:39] duration metric: took 1m40.516457222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:00:56.846193 1179332 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:00:56.846228 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:00:56.846290 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:00:56.907066 1179332 cri.go:89] found id: "2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea"
	I1007 12:00:56.907098 1179332 cri.go:89] found id: ""
	I1007 12:00:56.907107 1179332 logs.go:282] 1 containers: [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea]
	I1007 12:00:56.907164 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:56.910957 1179332 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 12:00:56.911028 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:00:56.949186 1179332 cri.go:89] found id: "ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43"
	I1007 12:00:56.949208 1179332 cri.go:89] found id: ""
	I1007 12:00:56.949216 1179332 logs.go:282] 1 containers: [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43]
	I1007 12:00:56.949275 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:56.952774 1179332 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 12:00:56.952859 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:00:56.994569 1179332 cri.go:89] found id: "c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc"
	I1007 12:00:56.994592 1179332 cri.go:89] found id: ""
	I1007 12:00:56.994600 1179332 logs.go:282] 1 containers: [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc]
	I1007 12:00:56.994656 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:56.998061 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:00:56.998141 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:00:57.044125 1179332 cri.go:89] found id: "cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8"
	I1007 12:00:57.044147 1179332 cri.go:89] found id: ""
	I1007 12:00:57.044154 1179332 logs.go:282] 1 containers: [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8]
	I1007 12:00:57.044220 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:57.048304 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:00:57.048431 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:00:57.098294 1179332 cri.go:89] found id: "fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916"
	I1007 12:00:57.098324 1179332 cri.go:89] found id: ""
	I1007 12:00:57.098333 1179332 logs.go:282] 1 containers: [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916]
	I1007 12:00:57.098395 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:57.102286 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:00:57.102375 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:00:57.144417 1179332 cri.go:89] found id: "09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a"
	I1007 12:00:57.144450 1179332 cri.go:89] found id: ""
	I1007 12:00:57.144459 1179332 logs.go:282] 1 containers: [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a]
	I1007 12:00:57.144560 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:57.148407 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 12:00:57.148507 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:00:57.187777 1179332 cri.go:89] found id: "82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4"
	I1007 12:00:57.187801 1179332 cri.go:89] found id: ""
	I1007 12:00:57.187810 1179332 logs.go:282] 1 containers: [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4]
	I1007 12:00:57.187867 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:00:57.191240 1179332 logs.go:123] Gathering logs for kubelet ...
	I1007 12:00:57.191266 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 12:00:57.261592 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: W1007 11:58:36.789481    1488 reflector.go:561] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-504513' and this object
	W1007 12:00:57.261930 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: E1007 11:58:36.789546    1488 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:00:57.262104 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: W1007 11:58:36.789481    1488 reflector.go:561] object-"gadget"/"gadget": failed to list *v1.ConfigMap: configmaps "gadget" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-504513' and this object
	W1007 12:00:57.262310 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: E1007 11:58:36.789593    1488 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"gadget\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"gadget\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:00:57.268166 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.078980    1488 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:00:57.268413 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079028    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:00:57.268577 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.079441    1488 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-504513" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:00:57.268787 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079474    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:00:57.269539 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.099084    1488 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-504513' and this object
	W1007 12:00:57.269763 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.099134    1488 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	I1007 12:00:57.306965 1179332 logs.go:123] Gathering logs for kube-apiserver [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea] ...
	I1007 12:00:57.306998 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea"
	I1007 12:00:57.364251 1179332 logs.go:123] Gathering logs for kube-scheduler [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8] ...
	I1007 12:00:57.364304 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8"
	I1007 12:00:57.411705 1179332 logs.go:123] Gathering logs for kube-controller-manager [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a] ...
	I1007 12:00:57.411736 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a"
	I1007 12:00:57.489405 1179332 logs.go:123] Gathering logs for container status ...
	I1007 12:00:57.489448 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:00:57.544945 1179332 logs.go:123] Gathering logs for dmesg ...
	I1007 12:00:57.544986 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:00:57.563506 1179332 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:00:57.563535 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:00:57.766411 1179332 logs.go:123] Gathering logs for etcd [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43] ...
	I1007 12:00:57.766440 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43"
	I1007 12:00:57.817311 1179332 logs.go:123] Gathering logs for coredns [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc] ...
	I1007 12:00:57.817350 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc"
	I1007 12:00:57.865138 1179332 logs.go:123] Gathering logs for kube-proxy [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916] ...
	I1007 12:00:57.865171 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916"
	I1007 12:00:57.905184 1179332 logs.go:123] Gathering logs for kindnet [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4] ...
	I1007 12:00:57.905214 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4"
	I1007 12:00:57.952799 1179332 logs.go:123] Gathering logs for CRI-O ...
	I1007 12:00:57.952830 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 12:00:58.045323 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 12:00:58.045356 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 12:00:58.045434 1179332 out.go:270] X Problems detected in kubelet:
	W1007 12:00:58.045448 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079028    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:00:58.045457 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.079441    1488 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-504513" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:00:58.045487 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079474    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:00:58.045496 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.099084    1488 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-504513' and this object
	W1007 12:00:58.045513 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.099134    1488 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	I1007 12:00:58.045519 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 12:00:58.045526 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:01:08.046085 1179332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:01:08.062328 1179332 api_server.go:72] duration metric: took 2m37.131510279s to wait for apiserver process to appear ...
	I1007 12:01:08.062355 1179332 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:01:08.062391 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:01:08.062454 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:01:08.102422 1179332 cri.go:89] found id: "2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea"
	I1007 12:01:08.102448 1179332 cri.go:89] found id: ""
	I1007 12:01:08.102456 1179332 logs.go:282] 1 containers: [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea]
	I1007 12:01:08.102523 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.106349 1179332 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 12:01:08.106425 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:01:08.150691 1179332 cri.go:89] found id: "ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43"
	I1007 12:01:08.150716 1179332 cri.go:89] found id: ""
	I1007 12:01:08.150725 1179332 logs.go:282] 1 containers: [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43]
	I1007 12:01:08.150792 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.154462 1179332 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 12:01:08.154543 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:01:08.195341 1179332 cri.go:89] found id: "c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc"
	I1007 12:01:08.195365 1179332 cri.go:89] found id: ""
	I1007 12:01:08.195373 1179332 logs.go:282] 1 containers: [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc]
	I1007 12:01:08.195431 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.198978 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:01:08.199063 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:01:08.241625 1179332 cri.go:89] found id: "cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8"
	I1007 12:01:08.241649 1179332 cri.go:89] found id: ""
	I1007 12:01:08.241657 1179332 logs.go:282] 1 containers: [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8]
	I1007 12:01:08.241716 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.245407 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:01:08.245480 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:01:08.284240 1179332 cri.go:89] found id: "fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916"
	I1007 12:01:08.284310 1179332 cri.go:89] found id: ""
	I1007 12:01:08.284318 1179332 logs.go:282] 1 containers: [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916]
	I1007 12:01:08.284382 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.287827 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:01:08.287923 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:01:08.332475 1179332 cri.go:89] found id: "09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a"
	I1007 12:01:08.332500 1179332 cri.go:89] found id: ""
	I1007 12:01:08.332508 1179332 logs.go:282] 1 containers: [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a]
	I1007 12:01:08.332566 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.336647 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 12:01:08.336722 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:01:08.384557 1179332 cri.go:89] found id: "82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4"
	I1007 12:01:08.384580 1179332 cri.go:89] found id: ""
	I1007 12:01:08.384588 1179332 logs.go:282] 1 containers: [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4]
	I1007 12:01:08.384647 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:08.388151 1179332 logs.go:123] Gathering logs for dmesg ...
	I1007 12:01:08.388178 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:01:08.404446 1179332 logs.go:123] Gathering logs for kube-apiserver [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea] ...
	I1007 12:01:08.404477 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea"
	I1007 12:01:08.468061 1179332 logs.go:123] Gathering logs for coredns [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc] ...
	I1007 12:01:08.468094 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc"
	I1007 12:01:08.511810 1179332 logs.go:123] Gathering logs for kube-proxy [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916] ...
	I1007 12:01:08.511842 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916"
	I1007 12:01:08.551880 1179332 logs.go:123] Gathering logs for kube-controller-manager [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a] ...
	I1007 12:01:08.551907 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a"
	I1007 12:01:08.625208 1179332 logs.go:123] Gathering logs for kindnet [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4] ...
	I1007 12:01:08.625244 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4"
	I1007 12:01:08.671264 1179332 logs.go:123] Gathering logs for CRI-O ...
	I1007 12:01:08.671295 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 12:01:08.765237 1179332 logs.go:123] Gathering logs for container status ...
	I1007 12:01:08.765274 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:01:08.821720 1179332 logs.go:123] Gathering logs for kubelet ...
	I1007 12:01:08.821760 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 12:01:08.885555 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: W1007 11:58:36.789481    1488 reflector.go:561] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-504513' and this object
	W1007 12:01:08.885866 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: E1007 11:58:36.789546    1488 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:08.886059 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: W1007 11:58:36.789481    1488 reflector.go:561] object-"gadget"/"gadget": failed to list *v1.ConfigMap: configmaps "gadget" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-504513' and this object
	W1007 12:01:08.886294 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: E1007 11:58:36.789593    1488 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"gadget\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"gadget\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:08.891885 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.078980    1488 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:01:08.892163 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079028    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:08.892369 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.079441    1488 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-504513" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:01:08.892601 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079474    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:08.893340 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.099084    1488 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-504513' and this object
	W1007 12:01:08.893589 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.099134    1488 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	I1007 12:01:08.946059 1179332 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:01:08.946115 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:01:09.111731 1179332 logs.go:123] Gathering logs for etcd [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43] ...
	I1007 12:01:09.111764 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43"
	I1007 12:01:09.166562 1179332 logs.go:123] Gathering logs for kube-scheduler [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8] ...
	I1007 12:01:09.166600 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8"
	I1007 12:01:09.213476 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 12:01:09.213504 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 12:01:09.213577 1179332 out.go:270] X Problems detected in kubelet:
	W1007 12:01:09.213595 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079028    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:09.213601 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.079441    1488 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-504513" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:01:09.213760 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079474    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:09.213769 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.099084    1488 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-504513' and this object
	W1007 12:01:09.213778 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.099134    1488 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	I1007 12:01:09.213785 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 12:01:09.213798 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:01:19.214945 1179332 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:01:19.222767 1179332 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1007 12:01:19.223808 1179332 api_server.go:141] control plane version: v1.31.1
	I1007 12:01:19.223835 1179332 api_server.go:131] duration metric: took 11.161471885s to wait for apiserver health ...
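	The healthz wait recorded above is just an HTTP GET against the apiserver until it answers 200 with the body "ok". A minimal Go sketch of that poll follows; it is illustrative only, not minikube's implementation, and the endpoint, retry interval, and insecure TLS client are assumptions taken from the log for the sake of the example.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Skip certificate verification only because this is a throwaway probe
		// against a local test cluster; a real client would trust the cluster CA.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for attempt := 0; attempt < 30; attempt++ {
			resp, err := client.Get("https://192.168.58.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver never became healthy")
	}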
	I1007 12:01:19.223844 1179332 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:01:19.223865 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:01:19.223930 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:01:19.264580 1179332 cri.go:89] found id: "2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea"
	I1007 12:01:19.264648 1179332 cri.go:89] found id: ""
	I1007 12:01:19.264672 1179332 logs.go:282] 1 containers: [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea]
	I1007 12:01:19.264742 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.268092 1179332 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 12:01:19.268162 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:01:19.307042 1179332 cri.go:89] found id: "ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43"
	I1007 12:01:19.307071 1179332 cri.go:89] found id: ""
	I1007 12:01:19.307081 1179332 logs.go:282] 1 containers: [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43]
	I1007 12:01:19.307149 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.310907 1179332 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 12:01:19.310985 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:01:19.350991 1179332 cri.go:89] found id: "c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc"
	I1007 12:01:19.351013 1179332 cri.go:89] found id: ""
	I1007 12:01:19.351021 1179332 logs.go:282] 1 containers: [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc]
	I1007 12:01:19.351081 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.354652 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:01:19.354726 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:01:19.391597 1179332 cri.go:89] found id: "cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8"
	I1007 12:01:19.391673 1179332 cri.go:89] found id: ""
	I1007 12:01:19.391695 1179332 logs.go:282] 1 containers: [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8]
	I1007 12:01:19.391772 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.395203 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:01:19.395265 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:01:19.433148 1179332 cri.go:89] found id: "fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916"
	I1007 12:01:19.433176 1179332 cri.go:89] found id: ""
	I1007 12:01:19.433185 1179332 logs.go:282] 1 containers: [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916]
	I1007 12:01:19.433273 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.437060 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:01:19.437162 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:01:19.481229 1179332 cri.go:89] found id: "09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a"
	I1007 12:01:19.481260 1179332 cri.go:89] found id: ""
	I1007 12:01:19.481269 1179332 logs.go:282] 1 containers: [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a]
	I1007 12:01:19.481346 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.485249 1179332 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 12:01:19.485371 1179332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:01:19.525804 1179332 cri.go:89] found id: "82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4"
	I1007 12:01:19.525827 1179332 cri.go:89] found id: ""
	I1007 12:01:19.525836 1179332 logs.go:282] 1 containers: [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4]
	I1007 12:01:19.525894 1179332 ssh_runner.go:195] Run: which crictl
	I1007 12:01:19.529508 1179332 logs.go:123] Gathering logs for kubelet ...
	I1007 12:01:19.529534 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 12:01:19.588137 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: W1007 11:58:36.789481    1488 reflector.go:561] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-504513' and this object
	W1007 12:01:19.588403 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: E1007 11:58:36.789546    1488 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:19.588572 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: W1007 11:58:36.789481    1488 reflector.go:561] object-"gadget"/"gadget": failed to list *v1.ConfigMap: configmaps "gadget" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-504513' and this object
	W1007 12:01:19.588780 1179332 logs.go:138] Found kubelet problem: Oct 07 11:58:36 addons-504513 kubelet[1488]: E1007 11:58:36.789593    1488 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"gadget\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"gadget\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:19.594272 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.078980    1488 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:01:19.594492 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079028    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:19.594658 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.079441    1488 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-504513" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:01:19.594870 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079474    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:19.595582 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.099084    1488 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-504513' and this object
	W1007 12:01:19.595810 1179332 logs.go:138] Found kubelet problem: Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.099134    1488 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	I1007 12:01:19.634068 1179332 logs.go:123] Gathering logs for kube-apiserver [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea] ...
	I1007 12:01:19.634099 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea"
	I1007 12:01:19.688673 1179332 logs.go:123] Gathering logs for kube-scheduler [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8] ...
	I1007 12:01:19.688712 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8"
	I1007 12:01:19.740630 1179332 logs.go:123] Gathering logs for kindnet [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4] ...
	I1007 12:01:19.740670 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4"
	I1007 12:01:19.781961 1179332 logs.go:123] Gathering logs for CRI-O ...
	I1007 12:01:19.781999 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 12:01:19.875352 1179332 logs.go:123] Gathering logs for container status ...
	I1007 12:01:19.875395 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:01:19.928974 1179332 logs.go:123] Gathering logs for dmesg ...
	I1007 12:01:19.929061 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:01:19.948652 1179332 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:01:19.948684 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:01:20.091457 1179332 logs.go:123] Gathering logs for etcd [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43] ...
	I1007 12:01:20.091505 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43"
	I1007 12:01:20.138011 1179332 logs.go:123] Gathering logs for coredns [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc] ...
	I1007 12:01:20.138042 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc"
	I1007 12:01:20.194946 1179332 logs.go:123] Gathering logs for kube-proxy [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916] ...
	I1007 12:01:20.194982 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916"
	I1007 12:01:20.237494 1179332 logs.go:123] Gathering logs for kube-controller-manager [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a] ...
	I1007 12:01:20.237526 1179332 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a"
	I1007 12:01:20.311415 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 12:01:20.311445 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 12:01:20.311505 1179332 out.go:270] X Problems detected in kubelet:
	W1007 12:01:20.311520 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079028    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:20.311529 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.079441    1488 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-504513" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-504513' and this object
	W1007 12:01:20.311544 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.079474    1488 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	W1007 12:01:20.311551 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: W1007 11:59:16.099084    1488 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-504513" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-504513' and this object
	W1007 12:01:20.311558 1179332 out.go:270]   Oct 07 11:59:16 addons-504513 kubelet[1488]: E1007 11:59:16.099134    1488 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-504513\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-504513' and this object" logger="UnhandledError"
	I1007 12:01:20.311571 1179332 out.go:358] Setting ErrFile to fd 2...
	I1007 12:01:20.311577 1179332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:01:30.324091 1179332 system_pods.go:59] 18 kube-system pods found
	I1007 12:01:30.324129 1179332 system_pods.go:61] "coredns-7c65d6cfc9-g27sx" [5afe3dbe-0baa-43f6-ad8f-5390d1d0ae08] Running
	I1007 12:01:30.324137 1179332 system_pods.go:61] "csi-hostpath-attacher-0" [b11f0a35-8e10-4fe5-85bf-566d75b11483] Running
	I1007 12:01:30.324142 1179332 system_pods.go:61] "csi-hostpath-resizer-0" [07b093a3-8c0f-4e12-a54b-1fdbf5c0baad] Running
	I1007 12:01:30.324147 1179332 system_pods.go:61] "csi-hostpathplugin-pwkd9" [01e688b9-41ec-4a0d-bafe-5c808db8abae] Running
	I1007 12:01:30.324151 1179332 system_pods.go:61] "etcd-addons-504513" [4c452ee8-38c8-48c1-8e67-076ea6a91a1d] Running
	I1007 12:01:30.324156 1179332 system_pods.go:61] "kindnet-mg82f" [c5a2e036-ec86-4a4f-9367-5a435dbc6aae] Running
	I1007 12:01:30.324160 1179332 system_pods.go:61] "kube-apiserver-addons-504513" [4e671cad-3b64-40e2-af15-fa2bc3fa1163] Running
	I1007 12:01:30.324165 1179332 system_pods.go:61] "kube-controller-manager-addons-504513" [9d6bbb18-052d-4221-8b09-f8cda0278a8a] Running
	I1007 12:01:30.324169 1179332 system_pods.go:61] "kube-ingress-dns-minikube" [af553361-9217-4f39-9943-971471f491a9] Running
	I1007 12:01:30.324174 1179332 system_pods.go:61] "kube-proxy-j4dwf" [7fe779f0-fb2d-41bd-bdb2-992cd98ca14c] Running
	I1007 12:01:30.324178 1179332 system_pods.go:61] "kube-scheduler-addons-504513" [469da8da-0f7a-4471-9aa5-5f9983d57e88] Running
	I1007 12:01:30.324183 1179332 system_pods.go:61] "metrics-server-84c5f94fbc-zzgph" [daa11124-8d8b-41b4-8005-50023acf5391] Running
	I1007 12:01:30.324194 1179332 system_pods.go:61] "nvidia-device-plugin-daemonset-zfrr9" [c8079eb2-5614-417f-b0b4-df99129833bd] Running
	I1007 12:01:30.324198 1179332 system_pods.go:61] "registry-66c9cd494c-fb9ws" [b8858fa3-9d16-4d5e-ba15-1cb90ece82b4] Running
	I1007 12:01:30.324203 1179332 system_pods.go:61] "registry-proxy-j7gr2" [2a98cc91-7c93-4911-ac0f-e807e5996a10] Running
	I1007 12:01:30.324207 1179332 system_pods.go:61] "snapshot-controller-56fcc65765-klwff" [41d64bb2-6bad-487c-9674-178e8ad3e59f] Running
	I1007 12:01:30.324213 1179332 system_pods.go:61] "snapshot-controller-56fcc65765-xlccl" [7f67806b-c9c0-45a1-aa15-8515c20f3073] Running
	I1007 12:01:30.324218 1179332 system_pods.go:61] "storage-provisioner" [942e5d23-1e6b-4fa6-a249-26972b7daa1d] Running
	I1007 12:01:30.324227 1179332 system_pods.go:74] duration metric: took 11.100375842s to wait for pod list to return data ...
	I1007 12:01:30.324238 1179332 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:01:30.326984 1179332 default_sa.go:45] found service account: "default"
	I1007 12:01:30.327015 1179332 default_sa.go:55] duration metric: took 2.771412ms for default service account to be created ...
	I1007 12:01:30.327026 1179332 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:01:30.337757 1179332 system_pods.go:86] 18 kube-system pods found
	I1007 12:01:30.337802 1179332 system_pods.go:89] "coredns-7c65d6cfc9-g27sx" [5afe3dbe-0baa-43f6-ad8f-5390d1d0ae08] Running
	I1007 12:01:30.337811 1179332 system_pods.go:89] "csi-hostpath-attacher-0" [b11f0a35-8e10-4fe5-85bf-566d75b11483] Running
	I1007 12:01:30.337817 1179332 system_pods.go:89] "csi-hostpath-resizer-0" [07b093a3-8c0f-4e12-a54b-1fdbf5c0baad] Running
	I1007 12:01:30.337823 1179332 system_pods.go:89] "csi-hostpathplugin-pwkd9" [01e688b9-41ec-4a0d-bafe-5c808db8abae] Running
	I1007 12:01:30.337828 1179332 system_pods.go:89] "etcd-addons-504513" [4c452ee8-38c8-48c1-8e67-076ea6a91a1d] Running
	I1007 12:01:30.337835 1179332 system_pods.go:89] "kindnet-mg82f" [c5a2e036-ec86-4a4f-9367-5a435dbc6aae] Running
	I1007 12:01:30.337841 1179332 system_pods.go:89] "kube-apiserver-addons-504513" [4e671cad-3b64-40e2-af15-fa2bc3fa1163] Running
	I1007 12:01:30.337846 1179332 system_pods.go:89] "kube-controller-manager-addons-504513" [9d6bbb18-052d-4221-8b09-f8cda0278a8a] Running
	I1007 12:01:30.337858 1179332 system_pods.go:89] "kube-ingress-dns-minikube" [af553361-9217-4f39-9943-971471f491a9] Running
	I1007 12:01:30.337862 1179332 system_pods.go:89] "kube-proxy-j4dwf" [7fe779f0-fb2d-41bd-bdb2-992cd98ca14c] Running
	I1007 12:01:30.337868 1179332 system_pods.go:89] "kube-scheduler-addons-504513" [469da8da-0f7a-4471-9aa5-5f9983d57e88] Running
	I1007 12:01:30.337877 1179332 system_pods.go:89] "metrics-server-84c5f94fbc-zzgph" [daa11124-8d8b-41b4-8005-50023acf5391] Running
	I1007 12:01:30.337882 1179332 system_pods.go:89] "nvidia-device-plugin-daemonset-zfrr9" [c8079eb2-5614-417f-b0b4-df99129833bd] Running
	I1007 12:01:30.337885 1179332 system_pods.go:89] "registry-66c9cd494c-fb9ws" [b8858fa3-9d16-4d5e-ba15-1cb90ece82b4] Running
	I1007 12:01:30.337891 1179332 system_pods.go:89] "registry-proxy-j7gr2" [2a98cc91-7c93-4911-ac0f-e807e5996a10] Running
	I1007 12:01:30.337898 1179332 system_pods.go:89] "snapshot-controller-56fcc65765-klwff" [41d64bb2-6bad-487c-9674-178e8ad3e59f] Running
	I1007 12:01:30.337902 1179332 system_pods.go:89] "snapshot-controller-56fcc65765-xlccl" [7f67806b-c9c0-45a1-aa15-8515c20f3073] Running
	I1007 12:01:30.337907 1179332 system_pods.go:89] "storage-provisioner" [942e5d23-1e6b-4fa6-a249-26972b7daa1d] Running
	I1007 12:01:30.337919 1179332 system_pods.go:126] duration metric: took 10.887374ms to wait for k8s-apps to be running ...
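	The "waiting for k8s-apps to be running" step above amounts to listing kube-system pods and confirming each is in phase Running. A short client-go sketch of the same idea is shown below; it is hypothetical rather than the minikube code, and it assumes client-go is available and reuses the kubeconfig path seen in the log.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is the one the test uses on the node (an assumption here).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
			}
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	}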
	I1007 12:01:30.337928 1179332 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:01:30.337992 1179332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:01:30.351956 1179332 system_svc.go:56] duration metric: took 14.001514ms WaitForService to wait for kubelet
	I1007 12:01:30.351990 1179332 kubeadm.go:582] duration metric: took 2m59.421176533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:01:30.352012 1179332 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:01:30.355352 1179332 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 12:01:30.355395 1179332 node_conditions.go:123] node cpu capacity is 2
	I1007 12:01:30.355408 1179332 node_conditions.go:105] duration metric: took 3.38068ms to run NodePressure ...
	I1007 12:01:30.355422 1179332 start.go:241] waiting for startup goroutines ...
	I1007 12:01:30.355429 1179332 start.go:246] waiting for cluster config update ...
	I1007 12:01:30.355446 1179332 start.go:255] writing updated cluster config ...
	I1007 12:01:30.355748 1179332 ssh_runner.go:195] Run: rm -f paused
	I1007 12:01:30.420515 1179332 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:01:30.424304 1179332 out.go:177] * Done! kubectl is now configured to use "addons-504513" cluster and "default" namespace by default
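	Throughout the wait loop above, each control-plane container is located with "sudo crictl ps -a --quiet --name=<component>" and its logs are then tailed with "crictl logs --tail 400 <id>". The following Go sketch reproduces that pattern locally; it is an approximation only (minikube runs these commands over SSH via ssh_runner, which is omitted here) and assumes crictl is on the node's PATH.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// List container IDs (one per line) for this component, running or exited.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Println(name, "lookup failed:", err)
				continue
			}
			for _, id := range strings.Fields(string(out)) {
				// Tail the last 400 log lines of each matching container.
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
			}
		}
	}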
	
	
	==> CRI-O <==
	Oct 07 12:14:26 addons-504513 crio[962]: time="2024-10-07 12:14:26.694845451Z" level=info msg="Stopping pod sandbox: adb8362985ce022a407338630f31e2135ebb78131e883aca8af22bae8771903c" id=4c4db5a9-27ea-4c04-a648-3ff97a5ce841 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 12:14:26 addons-504513 crio[962]: time="2024-10-07 12:14:26.694881865Z" level=info msg="Stopped pod sandbox (already stopped): adb8362985ce022a407338630f31e2135ebb78131e883aca8af22bae8771903c" id=4c4db5a9-27ea-4c04-a648-3ff97a5ce841 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 12:14:26 addons-504513 crio[962]: time="2024-10-07 12:14:26.695180415Z" level=info msg="Removing pod sandbox: adb8362985ce022a407338630f31e2135ebb78131e883aca8af22bae8771903c" id=7de4f123-d1b6-4366-a392-9aacfba8a133 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 12:14:26 addons-504513 crio[962]: time="2024-10-07 12:14:26.702899628Z" level=info msg="Removed pod sandbox: adb8362985ce022a407338630f31e2135ebb78131e883aca8af22bae8771903c" id=7de4f123-d1b6-4366-a392-9aacfba8a133 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 12:14:26 addons-504513 crio[962]: time="2024-10-07 12:14:26.703471005Z" level=info msg="Stopping pod sandbox: e9511346954d021db9937e521c7d262d06be1a3c251d3981db5f1cb86c811b6a" id=2eb7aa72-504a-4c78-8fd4-f86b7bda5f6a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 12:14:26 addons-504513 crio[962]: time="2024-10-07 12:14:26.703508978Z" level=info msg="Stopped pod sandbox (already stopped): e9511346954d021db9937e521c7d262d06be1a3c251d3981db5f1cb86c811b6a" id=2eb7aa72-504a-4c78-8fd4-f86b7bda5f6a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 12:14:26 addons-504513 crio[962]: time="2024-10-07 12:14:26.703833850Z" level=info msg="Removing pod sandbox: e9511346954d021db9937e521c7d262d06be1a3c251d3981db5f1cb86c811b6a" id=9b9a1be0-8772-4f52-a815-b229c068416c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 12:14:26 addons-504513 crio[962]: time="2024-10-07 12:14:26.718290465Z" level=info msg="Removed pod sandbox: e9511346954d021db9937e521c7d262d06be1a3c251d3981db5f1cb86c811b6a" id=9b9a1be0-8772-4f52-a815-b229c068416c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 12:14:26 addons-504513 crio[962]: time="2024-10-07 12:14:26.718848525Z" level=info msg="Stopping pod sandbox: 1932bb9824ec8c1fd996cbd5c72cac5d182a5f510c0da236d62521e700a8e5e6" id=18162bed-1008-47c7-846a-b8a19a474527 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 12:14:26 addons-504513 crio[962]: time="2024-10-07 12:14:26.719005242Z" level=info msg="Stopped pod sandbox (already stopped): 1932bb9824ec8c1fd996cbd5c72cac5d182a5f510c0da236d62521e700a8e5e6" id=18162bed-1008-47c7-846a-b8a19a474527 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 12:14:26 addons-504513 crio[962]: time="2024-10-07 12:14:26.719438905Z" level=info msg="Removing pod sandbox: 1932bb9824ec8c1fd996cbd5c72cac5d182a5f510c0da236d62521e700a8e5e6" id=bb3b39f9-9d59-4325-9386-b793aac24435 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 12:14:26 addons-504513 crio[962]: time="2024-10-07 12:14:26.728732910Z" level=info msg="Removed pod sandbox: 1932bb9824ec8c1fd996cbd5c72cac5d182a5f510c0da236d62521e700a8e5e6" id=bb3b39f9-9d59-4325-9386-b793aac24435 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 12:14:40 addons-504513 crio[962]: time="2024-10-07 12:14:40.387638276Z" level=info msg="Stopping container: 6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b (timeout: 30s)" id=2a7955d6-68f6-41ef-a681-d5ea9af9eb23 name=/runtime.v1.RuntimeService/StopContainer
	Oct 07 12:14:40 addons-504513 conmon[4735]: conmon 6da6a055d39712331f8d <ninfo>: container 4746 exited with status 2
	Oct 07 12:14:40 addons-504513 crio[962]: time="2024-10-07 12:14:40.527641084Z" level=info msg="Stopped container 6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b: default/cloud-spanner-emulator-5b584cc74-vr46n/cloud-spanner-emulator" id=2a7955d6-68f6-41ef-a681-d5ea9af9eb23 name=/runtime.v1.RuntimeService/StopContainer
	Oct 07 12:14:40 addons-504513 crio[962]: time="2024-10-07 12:14:40.528135965Z" level=info msg="Stopping pod sandbox: 1272452becbe2c50ddde4f747cd04730850af92580eb6d03e2dd0bd4aa176e54" id=f397c646-844f-4fe0-adb9-8c30f8f13397 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 12:14:40 addons-504513 crio[962]: time="2024-10-07 12:14:40.528408562Z" level=info msg="Got pod network &{Name:cloud-spanner-emulator-5b584cc74-vr46n Namespace:default ID:1272452becbe2c50ddde4f747cd04730850af92580eb6d03e2dd0bd4aa176e54 UID:28145496-aeb6-4e85-a1ef-5f328a2a7473 NetNS:/var/run/netns/a3b7b779-4592-4fe9-9d7e-f5ee61eca679 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 07 12:14:40 addons-504513 crio[962]: time="2024-10-07 12:14:40.528559905Z" level=info msg="Deleting pod default_cloud-spanner-emulator-5b584cc74-vr46n from CNI network \"kindnet\" (type=ptp)"
	Oct 07 12:14:40 addons-504513 crio[962]: time="2024-10-07 12:14:40.556745238Z" level=info msg="Stopped pod sandbox: 1272452becbe2c50ddde4f747cd04730850af92580eb6d03e2dd0bd4aa176e54" id=f397c646-844f-4fe0-adb9-8c30f8f13397 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 12:14:40 addons-504513 crio[962]: time="2024-10-07 12:14:40.716100934Z" level=info msg="Removing container: 6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b" id=bae51765-a9c3-4046-ab81-e3b29d6779da name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 07 12:14:40 addons-504513 crio[962]: time="2024-10-07 12:14:40.739249261Z" level=info msg="Removed container 6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b: default/cloud-spanner-emulator-5b584cc74-vr46n/cloud-spanner-emulator" id=bae51765-a9c3-4046-ab81-e3b29d6779da name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 07 12:15:26 addons-504513 crio[962]: time="2024-10-07 12:15:26.732283454Z" level=info msg="Stopping pod sandbox: 1272452becbe2c50ddde4f747cd04730850af92580eb6d03e2dd0bd4aa176e54" id=7bcaba92-7114-4854-994b-9642e1942f76 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 12:15:26 addons-504513 crio[962]: time="2024-10-07 12:15:26.732328779Z" level=info msg="Stopped pod sandbox (already stopped): 1272452becbe2c50ddde4f747cd04730850af92580eb6d03e2dd0bd4aa176e54" id=7bcaba92-7114-4854-994b-9642e1942f76 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 12:15:26 addons-504513 crio[962]: time="2024-10-07 12:15:26.732922835Z" level=info msg="Removing pod sandbox: 1272452becbe2c50ddde4f747cd04730850af92580eb6d03e2dd0bd4aa176e54" id=251e37d5-8f59-4c77-bbc0-25fdc1309412 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 12:15:26 addons-504513 crio[962]: time="2024-10-07 12:15:26.741254573Z" level=info msg="Removed pod sandbox: 1272452becbe2c50ddde4f747cd04730850af92580eb6d03e2dd0bd4aa176e54" id=251e37d5-8f59-4c77-bbc0-25fdc1309412 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da28ac1204dde       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   92d20e2711954       hello-world-app-55bf9c44b4-6kpzh
	4595bc9d59c71       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago       Running             busybox                   0                   06d9e375d2d09       busybox
	20b2e23c95e1b       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         4 minutes ago       Running             nginx                     0                   789bcbc8c471f       nginx
	368c814bc16fc       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   15 minutes ago      Running             metrics-server            0                   b702d9dba195b       metrics-server-84c5f94fbc-zzgph
	5a5d902eb7092       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        16 minutes ago      Running             storage-provisioner       0                   155e335a997d7       storage-provisioner
	c60017af89967       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        16 minutes ago      Running             coredns                   0                   c07f24bd8fa65       coredns-7c65d6cfc9-g27sx
	fd40e0c547214       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        17 minutes ago      Running             kube-proxy                0                   a2234f27ea43b       kube-proxy-j4dwf
	82e9dcb708dff       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                        17 minutes ago      Running             kindnet-cni               0                   75a020e3a4985       kindnet-mg82f
	2f1eb19abef58       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        17 minutes ago      Running             kube-apiserver            0                   9b8dd3b909ac4       kube-apiserver-addons-504513
	09fd038c50124       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        17 minutes ago      Running             kube-controller-manager   0                   6f49b3f0d3ef2       kube-controller-manager-addons-504513
	cafddae5dc35a       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        17 minutes ago      Running             kube-scheduler            0                   0165c7b27ab2a       kube-scheduler-addons-504513
	ea9071e39cce0       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        17 minutes ago      Running             etcd                      0                   881d912aca59e       etcd-addons-504513
	
	
	==> coredns [c60017af899678cfdacdc2d469f258ef1930ffde3464d3d1f2e4a40dbeaec9cc] <==
	[INFO] 10.244.0.19:58866 - 24118 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000171428s
	[INFO] 10.244.0.19:44104 - 37321 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002429762s
	[INFO] 10.244.0.19:58866 - 38543 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001877018s
	[INFO] 10.244.0.19:44104 - 56961 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00220029s
	[INFO] 10.244.0.19:44104 - 31145 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00013298s
	[INFO] 10.244.0.19:58866 - 5863 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001727973s
	[INFO] 10.244.0.19:58866 - 51550 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075134s
	[INFO] 10.244.0.19:37894 - 12706 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000106166s
	[INFO] 10.244.0.19:36948 - 10914 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000034781s
	[INFO] 10.244.0.19:36948 - 9274 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000128459s
	[INFO] 10.244.0.19:36948 - 24941 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000101571s
	[INFO] 10.244.0.19:37894 - 31692 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043307s
	[INFO] 10.244.0.19:36948 - 36419 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000117448s
	[INFO] 10.244.0.19:37894 - 21638 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003593s
	[INFO] 10.244.0.19:36948 - 16979 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000109923s
	[INFO] 10.244.0.19:37894 - 20020 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000143712s
	[INFO] 10.244.0.19:37894 - 5114 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056427s
	[INFO] 10.244.0.19:36948 - 60624 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038154s
	[INFO] 10.244.0.19:37894 - 56172 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030506s
	[INFO] 10.244.0.19:36948 - 55706 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001879135s
	[INFO] 10.244.0.19:37894 - 44695 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003822884s
	[INFO] 10.244.0.19:37894 - 46402 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00091124s
	[INFO] 10.244.0.19:36948 - 46201 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001889088s
	[INFO] 10.244.0.19:36948 - 22741 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051618s
	[INFO] 10.244.0.19:37894 - 7804 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048435s
	
	
	==> describe nodes <==
	Name:               addons-504513
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-504513
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=addons-504513
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T11_58_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-504513
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 11:58:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-504513
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:15:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:14:04 +0000   Mon, 07 Oct 2024 11:58:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:14:04 +0000   Mon, 07 Oct 2024 11:58:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:14:04 +0000   Mon, 07 Oct 2024 11:58:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:14:04 +0000   Mon, 07 Oct 2024 11:59:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    addons-504513
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9f2535c52294194a698e057647c458a
	  System UUID:                ce552362-e2a4-4a6f-95fb-4dd9841bc164
	  Boot ID:                    9a8fefe6-3962-4cb9-809a-2b740ac8992f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-6kpzh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 coredns-7c65d6cfc9-g27sx                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-addons-504513                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-mg82f                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-addons-504513             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-504513    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-j4dwf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-504513             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-84c5f94fbc-zzgph          100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         17m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 17m   kube-proxy       
	  Normal   Starting                 17m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m   kubelet          Node addons-504513 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m   kubelet          Node addons-504513 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m   kubelet          Node addons-504513 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m   node-controller  Node addons-504513 event: Registered Node addons-504513 in Controller
	  Normal   NodeReady                16m   kubelet          Node addons-504513 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 7 11:30] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [ea9071e39cce072dc9f4a6c823255e2c75d3f36db5b1b11b96fcd4cac0eeeb43] <==
	{"level":"info","ts":"2024-10-07T11:58:21.232377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-10-07T11:58:21.232384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-10-07T11:58:21.232394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-10-07T11:58:21.232402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-10-07T11:58:21.240316Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:58:21.244443Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:addons-504513 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T11:58:21.244645Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:58:21.244714Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:58:21.244736Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:58:21.244750Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:58:21.244974Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:58:21.245621Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:58:21.246473Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-10-07T11:58:21.247051Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:58:21.247863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T11:58:21.263456Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T11:58:21.263492Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T11:58:33.056985Z","caller":"traceutil/trace.go:171","msg":"trace[708153501] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"111.67362ms","start":"2024-10-07T11:58:32.945295Z","end":"2024-10-07T11:58:33.056969Z","steps":["trace[708153501] 'process raft request'  (duration: 111.348585ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T11:58:33.416827Z","caller":"traceutil/trace.go:171","msg":"trace[499780291] transaction","detail":"{read_only:false; response_revision:324; number_of_response:1; }","duration":"107.545974ms","start":"2024-10-07T11:58:33.309265Z","end":"2024-10-07T11:58:33.416811Z","steps":["trace[499780291] 'process raft request'  (duration: 107.418443ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:08:21.499271Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1479}
	{"level":"info","ts":"2024-10-07T12:08:21.529619Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1479,"took":"29.91558ms","hash":2429785176,"current-db-size-bytes":6074368,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":3039232,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2024-10-07T12:08:21.529675Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2429785176,"revision":1479,"compact-revision":-1}
	{"level":"info","ts":"2024-10-07T12:13:21.505096Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1895}
	{"level":"info","ts":"2024-10-07T12:13:21.522438Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1895,"took":"16.644212ms","hash":1115011858,"current-db-size-bytes":6074368,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":4554752,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-10-07T12:13:21.522494Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1115011858,"revision":1895,"compact-revision":1479}
	
	
	==> kernel <==
	 12:15:46 up  7:58,  0 users,  load average: 0.21, 0.30, 0.93
	Linux addons-504513 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [82e9dcb708dffce1f2e5f2e55ba278ac2f20f616be1420c29d22fa5aba234fc4] <==
	I1007 12:13:45.470085       1 main.go:299] handling current node
	I1007 12:13:55.469411       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:13:55.469444       1 main.go:299] handling current node
	I1007 12:14:05.469513       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:14:05.469554       1 main.go:299] handling current node
	I1007 12:14:15.469824       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:14:15.469861       1 main.go:299] handling current node
	I1007 12:14:25.471718       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:14:25.471751       1 main.go:299] handling current node
	I1007 12:14:35.469296       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:14:35.469326       1 main.go:299] handling current node
	I1007 12:14:45.469827       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:14:45.469863       1 main.go:299] handling current node
	I1007 12:14:55.474579       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:14:55.474715       1 main.go:299] handling current node
	I1007 12:15:05.469163       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:15:05.469200       1 main.go:299] handling current node
	I1007 12:15:15.469822       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:15:15.469855       1 main.go:299] handling current node
	I1007 12:15:25.476737       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:15:25.476768       1 main.go:299] handling current node
	I1007 12:15:35.468832       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:15:35.468865       1 main.go:299] handling current node
	I1007 12:15:45.469014       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:15:45.469128       1 main.go:299] handling current node
	
	
	==> kube-apiserver [2f1eb19abef58eb214952ad15e3e6017d1d128bfcfe48bb9c3d218d2135232ea] <==
	 > logger="UnhandledError"
	E1007 12:00:56.722308       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.76.92:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.76.92:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.76.92:443: connect: connection refused" logger="UnhandledError"
	E1007 12:00:56.725228       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.76.92:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.76.92:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.76.92:443: connect: connection refused" logger="UnhandledError"
	E1007 12:00:56.729046       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.76.92:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.76.92:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.76.92:443: connect: connection refused" logger="UnhandledError"
	I1007 12:00:56.817604       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1007 12:09:43.776650       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.13.90"}
	I1007 12:10:19.332987       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1007 12:10:32.090675       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 12:10:32.090815       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 12:10:32.124001       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 12:10:32.124097       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 12:10:32.143896       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 12:10:32.144013       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 12:10:32.171510       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 12:10:32.172950       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 12:10:32.297298       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 12:10:32.297360       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1007 12:10:33.172617       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1007 12:10:33.297504       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1007 12:10:33.305258       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1007 12:10:45.920707       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1007 12:10:51.479943       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1007 12:10:51.774459       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.216.123"}
	I1007 12:13:12.310959       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.63.126"}
	E1007 12:14:06.547091       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [09fd038c50124672f3001d7262cbd38fbe330026eb890cb5742871845a77536a] <==
	W1007 12:13:52.485006       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:13:52.485049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:14:04.211711       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:14:04.211755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 12:14:04.712944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-504513"
	W1007 12:14:08.665012       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:14:08.665057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:14:34.994913       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:14:34.994953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:14:35.421385       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:14:35.421538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 12:14:38.802972       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I1007 12:14:40.368120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5b584cc74" duration="9.452µs"
	W1007 12:14:43.143208       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:14:43.143250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:14:43.917555       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:14:43.917596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:15:06.809193       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:15:06.809235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:15:19.761278       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:15:19.761317       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:15:32.339364       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:15:32.339409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:15:37.132523       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:15:37.132565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [fd40e0c54721445ee3f493f11b135f6acde34b8d74e6e0055a0129108498d916] <==
	I1007 11:58:36.726948       1 server_linux.go:66] "Using iptables proxy"
	I1007 11:58:37.170988       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.58.2"]
	E1007 11:58:37.171199       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 11:58:37.431764       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1007 11:58:37.431897       1 server_linux.go:169] "Using iptables Proxier"
	I1007 11:58:37.433758       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 11:58:37.434159       1 server.go:483] "Version info" version="v1.31.1"
	I1007 11:58:37.434425       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:58:37.442472       1 config.go:199] "Starting service config controller"
	I1007 11:58:37.442559       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 11:58:37.442622       1 config.go:105] "Starting endpoint slice config controller"
	I1007 11:58:37.442652       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 11:58:37.451350       1 config.go:328] "Starting node config controller"
	I1007 11:58:37.451485       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 11:58:37.547337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 11:58:37.547617       1 shared_informer.go:320] Caches are synced for service config
	I1007 11:58:37.552363       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cafddae5dc35aa98dba0b2d05cb328a44fcc7863943a56a6e8875f44152ceee8] <==
	E1007 11:58:24.659403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.659517       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 11:58:24.659573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E1007 11:58:24.659651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.659843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 11:58:24.659917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.659951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 11:58:24.660037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.660460       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 11:58:24.660542       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 11:58:24.660943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 11:58:24.661010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.661149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 11:58:24.661202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.661308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 11:58:24.661355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.661461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 11:58:24.661507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.661657       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 11:58:24.661709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.661821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 11:58:24.661871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 11:58:24.661963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 11:58:24.662023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1007 11:58:26.252435       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 12:14:36 addons-504513 kubelet[1488]: E1007 12:14:36.485396    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303276485141066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:14:36 addons-504513 kubelet[1488]: E1007 12:14:36.485436    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303276485141066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:14:40 addons-504513 kubelet[1488]: I1007 12:14:40.583424    1488 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtw7c\" (UniqueName: \"kubernetes.io/projected/28145496-aeb6-4e85-a1ef-5f328a2a7473-kube-api-access-jtw7c\") pod \"28145496-aeb6-4e85-a1ef-5f328a2a7473\" (UID: \"28145496-aeb6-4e85-a1ef-5f328a2a7473\") "
	Oct 07 12:14:40 addons-504513 kubelet[1488]: I1007 12:14:40.587323    1488 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28145496-aeb6-4e85-a1ef-5f328a2a7473-kube-api-access-jtw7c" (OuterVolumeSpecName: "kube-api-access-jtw7c") pod "28145496-aeb6-4e85-a1ef-5f328a2a7473" (UID: "28145496-aeb6-4e85-a1ef-5f328a2a7473"). InnerVolumeSpecName "kube-api-access-jtw7c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 07 12:14:40 addons-504513 kubelet[1488]: I1007 12:14:40.684381    1488 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jtw7c\" (UniqueName: \"kubernetes.io/projected/28145496-aeb6-4e85-a1ef-5f328a2a7473-kube-api-access-jtw7c\") on node \"addons-504513\" DevicePath \"\""
	Oct 07 12:14:40 addons-504513 kubelet[1488]: I1007 12:14:40.714308    1488 scope.go:117] "RemoveContainer" containerID="6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b"
	Oct 07 12:14:40 addons-504513 kubelet[1488]: I1007 12:14:40.739575    1488 scope.go:117] "RemoveContainer" containerID="6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b"
	Oct 07 12:14:40 addons-504513 kubelet[1488]: E1007 12:14:40.739954    1488 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b\": container with ID starting with 6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b not found: ID does not exist" containerID="6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b"
	Oct 07 12:14:40 addons-504513 kubelet[1488]: I1007 12:14:40.739991    1488 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b"} err="failed to get container status \"6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b\": rpc error: code = NotFound desc = could not find container \"6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b\": container with ID starting with 6da6a055d39712331f8df5f37729649e3d78297b597a9c7a7c79d23bd46e203b not found: ID does not exist"
	Oct 07 12:14:42 addons-504513 kubelet[1488]: I1007 12:14:42.189052    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28145496-aeb6-4e85-a1ef-5f328a2a7473" path="/var/lib/kubelet/pods/28145496-aeb6-4e85-a1ef-5f328a2a7473/volumes"
	Oct 07 12:14:46 addons-504513 kubelet[1488]: E1007 12:14:46.488295    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303286488066074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:14:46 addons-504513 kubelet[1488]: E1007 12:14:46.488331    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303286488066074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:14:56 addons-504513 kubelet[1488]: E1007 12:14:56.491074    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303296490798251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:14:56 addons-504513 kubelet[1488]: E1007 12:14:56.491114    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303296490798251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:15:06 addons-504513 kubelet[1488]: E1007 12:15:06.494163    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303306493924603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:15:06 addons-504513 kubelet[1488]: E1007 12:15:06.494200    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303306493924603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:15:16 addons-504513 kubelet[1488]: E1007 12:15:16.497085    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303316496796209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:15:16 addons-504513 kubelet[1488]: E1007 12:15:16.497128    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303316496796209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:15:23 addons-504513 kubelet[1488]: I1007 12:15:23.188281    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 12:15:26 addons-504513 kubelet[1488]: E1007 12:15:26.499475    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303326499246702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:15:26 addons-504513 kubelet[1488]: E1007 12:15:26.499510    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303326499246702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:15:36 addons-504513 kubelet[1488]: E1007 12:15:36.502294    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303336502048425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:15:36 addons-504513 kubelet[1488]: E1007 12:15:36.502335    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303336502048425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:15:46 addons-504513 kubelet[1488]: E1007 12:15:46.506234    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303346505829390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:15:46 addons-504513 kubelet[1488]: E1007 12:15:46.506285    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303346505829390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [5a5d902eb70920ddbf3acd681555c221118e7498466da95d2b36224cb168560b] <==
	I1007 11:59:17.143747       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 11:59:17.159464       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 11:59:17.159542       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 11:59:17.168861       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 11:59:17.169114       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-504513_4bd8a0fd-b92b-4d3c-99a1-0b6504c0ad34!
	I1007 11:59:17.169398       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bd91f230-4fa0-49d1-a01e-4a1414f60404", APIVersion:"v1", ResourceVersion:"881", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-504513_4bd8a0fd-b92b-4d3c-99a1-0b6504c0ad34 became leader
	I1007 11:59:17.272609       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-504513_4bd8a0fd-b92b-4d3c-99a1-0b6504c0ad34!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-504513 -n addons-504513
helpers_test.go:261: (dbg) Run:  kubectl --context addons-504513 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (347.92s)
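For reference, the kube-apiserver entries in the log dump above repeatedly report the aggregated v1beta1.metrics.k8s.io endpoint as "failing or missing response ... connection refused" while the addon was coming up, and this test is the one that exercises that aggregated API. As a minimal illustrative sketch (not part of the test suite; it only assumes kubeconfig access to the cluster), the same availability can be probed with client-go's discovery client:

// probe_metrics.go: ask the API server whether metrics.k8s.io/v1beta1 is currently served.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig points at the cluster under test (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}
	// Discovery of the group/version succeeds only if the APIService is registered
	// and its backing metrics-server pod is answering.
	res, err := cs.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
	if err != nil {
		log.Fatalf("metrics API not available: %v", err)
	}
	for _, r := range res.APIResources {
		fmt.Println("serving resource:", r.Name)
	}
}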

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (139.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-600773 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1007 12:29:30.726035 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:29:58.428683 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:31:31.242129 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-600773 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m14.680479165s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:591: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-600773       NotReady   control-plane   10m     v1.31.1
	ha-600773-m02   Ready      control-plane   10m     v1.31.1
	ha-600773-m04   Ready      <none>          7m52s   v1.31.1

                                                
                                                
-- /stdout --
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
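The go-template above pulls each node's Ready condition out of .status.conditions; the leading " Unknown" line corresponds to ha-600773, the restarted control-plane node that never reported Ready again. The equivalent check as a small client-go sketch (assumed for illustration only, not taken from ha_test.go; the kubeconfig path is the default one):

// check_nodes_ready.go: list every node and print its Ready condition status.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig pointing at the ha-600773 cluster (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("list nodes: %v", err)
	}
	for _, n := range nodes.Items {
		for _, cond := range n.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				// A node whose kubelet has stopped posting status reports Unknown here,
				// which is what the template output above shows for the NotReady node.
				fmt.Printf("%s\t%s\n", n.Name, cond.Status)
			}
		}
	}
}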
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-600773
helpers_test.go:235: (dbg) docker inspect ha-600773:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "82aa0f339f38d1d3c2254427bd3b1a4bb8da8b165c52c4ff811edb03a807c9f5",
	        "Created": "2024-10-07T12:20:34.790363857Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1244587,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T12:29:24.886197254Z",
	            "FinishedAt": "2024-10-07T12:29:24.134019731Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/82aa0f339f38d1d3c2254427bd3b1a4bb8da8b165c52c4ff811edb03a807c9f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82aa0f339f38d1d3c2254427bd3b1a4bb8da8b165c52c4ff811edb03a807c9f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/82aa0f339f38d1d3c2254427bd3b1a4bb8da8b165c52c4ff811edb03a807c9f5/hosts",
	        "LogPath": "/var/lib/docker/containers/82aa0f339f38d1d3c2254427bd3b1a4bb8da8b165c52c4ff811edb03a807c9f5/82aa0f339f38d1d3c2254427bd3b1a4bb8da8b165c52c4ff811edb03a807c9f5-json.log",
	        "Name": "/ha-600773",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-600773:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-600773",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0afadec5e415f57d078aaf6bff10fc903982eac2ada02d59b0b3828534b780e4-init/diff:/var/lib/docker/overlay2/679cc8fccbb0902884eb141037cc21fc6e7a2efac609a53e07ea6b92675ef1c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0afadec5e415f57d078aaf6bff10fc903982eac2ada02d59b0b3828534b780e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0afadec5e415f57d078aaf6bff10fc903982eac2ada02d59b0b3828534b780e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0afadec5e415f57d078aaf6bff10fc903982eac2ada02d59b0b3828534b780e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-600773",
	                "Source": "/var/lib/docker/volumes/ha-600773/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-600773",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-600773",
	                "name.minikube.sigs.k8s.io": "ha-600773",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5de7454f4aca32f5ea31da97a92c9b04935b4ee1aab1f6046d4e267d898e76ad",
	            "SandboxKey": "/var/run/docker/netns/5de7454f4aca",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34307"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34308"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34311"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34309"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34310"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-600773": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d2dc7c09db9bd2d7ca30e406ab5306fcd5526bab0a8768db533fddcb6c109c52",
	                    "EndpointID": "3a2cd94265bbe91c84e4e1907f11f31d39750c3b306f82b0afe8ef4390752d05",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-600773",
	                        "82aa0f339f38"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
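Editor's note: helpers_test.go records the full `docker inspect` document verbatim. When only the restart-related fields matter (the State block with StartedAt/FinishedAt and the container status), a narrower query is usually enough. The sketch below is a hypothetical helper, not part of the harness, that shells out to `docker inspect` with a format template to pull just those fields shown in the dump above.

// containerstate.go: hypothetical helper (not from helpers_test.go) that reads
// only the container state fields via a docker inspect format template.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerState(name string) (string, error) {
	// {{.State.Status}}, {{.State.StartedAt}} and {{.State.FinishedAt}} are the
	// same fields that appear in the inspect output recorded above.
	out, err := exec.Command("docker", "inspect", "-f",
		"{{.State.Status}} {{.State.StartedAt}} {{.State.FinishedAt}}", name).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("ha-600773")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(state)
}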
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-600773 -n ha-600773
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-600773 logs -n 25: (2.286387292s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-600773 cp ha-600773-m03:/home/docker/cp-test.txt                              | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | ha-600773-m04:/home/docker/cp-test_ha-600773-m03_ha-600773-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-600773 ssh -n                                                                 | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | ha-600773-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-600773 ssh -n ha-600773-m04 sudo cat                                          | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | /home/docker/cp-test_ha-600773-m03_ha-600773-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-600773 cp testdata/cp-test.txt                                                | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | ha-600773-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-600773 ssh -n                                                                 | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | ha-600773-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-600773 cp ha-600773-m04:/home/docker/cp-test.txt                              | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1049508879/001/cp-test_ha-600773-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-600773 ssh -n                                                                 | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | ha-600773-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-600773 cp ha-600773-m04:/home/docker/cp-test.txt                              | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | ha-600773:/home/docker/cp-test_ha-600773-m04_ha-600773.txt                       |           |         |         |                     |                     |
	| ssh     | ha-600773 ssh -n                                                                 | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | ha-600773-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-600773 ssh -n ha-600773 sudo cat                                              | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | /home/docker/cp-test_ha-600773-m04_ha-600773.txt                                 |           |         |         |                     |                     |
	| cp      | ha-600773 cp ha-600773-m04:/home/docker/cp-test.txt                              | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | ha-600773-m02:/home/docker/cp-test_ha-600773-m04_ha-600773-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-600773 ssh -n                                                                 | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | ha-600773-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-600773 ssh -n ha-600773-m02 sudo cat                                          | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | /home/docker/cp-test_ha-600773-m04_ha-600773-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-600773 cp ha-600773-m04:/home/docker/cp-test.txt                              | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | ha-600773-m03:/home/docker/cp-test_ha-600773-m04_ha-600773-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-600773 ssh -n                                                                 | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | ha-600773-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-600773 ssh -n ha-600773-m03 sudo cat                                          | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | /home/docker/cp-test_ha-600773-m04_ha-600773-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-600773 node stop m02 -v=7                                                     | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-600773 node start m02 -v=7                                                    | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-600773 -v=7                                                           | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-600773 -v=7                                                                | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:25 UTC | 07 Oct 24 12:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-600773 --wait=true -v=7                                                    | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:25 UTC | 07 Oct 24 12:28 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-600773                                                                | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:28 UTC |                     |
	| node    | ha-600773 node delete m03 -v=7                                                   | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:28 UTC | 07 Oct 24 12:28 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-600773 stop -v=7                                                              | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:28 UTC | 07 Oct 24 12:29 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-600773 --wait=true                                                         | ha-600773 | jenkins | v1.34.0 | 07 Oct 24 12:29 UTC | 07 Oct 24 12:31 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:29:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:29:24.565694 1244393 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:29:24.565840 1244393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:29:24.565852 1244393 out.go:358] Setting ErrFile to fd 2...
	I1007 12:29:24.565858 1244393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:29:24.566108 1244393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
	I1007 12:29:24.566478 1244393 out.go:352] Setting JSON to false
	I1007 12:29:24.567396 1244393 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29509,"bootTime":1728274656,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 12:29:24.567472 1244393 start.go:139] virtualization:  
	I1007 12:29:24.570083 1244393 out.go:177] * [ha-600773] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 12:29:24.571789 1244393 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:29:24.571858 1244393 notify.go:220] Checking for updates...
	I1007 12:29:24.575250 1244393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:29:24.576993 1244393 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 12:29:24.578457 1244393 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	I1007 12:29:24.579960 1244393 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 12:29:24.581568 1244393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:29:24.584056 1244393 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:29:24.584649 1244393 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:29:24.614441 1244393 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 12:29:24.614573 1244393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:29:24.666186 1244393 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:1 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:51 SystemTime:2024-10-07 12:29:24.656798729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:29:24.666307 1244393 docker.go:318] overlay module found
	I1007 12:29:24.669720 1244393 out.go:177] * Using the docker driver based on existing profile
	I1007 12:29:24.671783 1244393 start.go:297] selected driver: docker
	I1007 12:29:24.671801 1244393 start.go:901] validating driver "docker" against &{Name:ha-600773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-600773 Namespace:default APIServerHAVIP:192.168.58.254 APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.58.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logvi
ewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:29:24.671949 1244393 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:29:24.672055 1244393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:29:24.725657 1244393 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:1 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:51 SystemTime:2024-10-07 12:29:24.715661006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:29:24.726148 1244393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:29:24.726175 1244393 cni.go:84] Creating CNI manager for ""
	I1007 12:29:24.726214 1244393 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 12:29:24.726276 1244393 start.go:340] cluster config:
	{Name:ha-600773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-600773 Namespace:default APIServerHAVIP:192.168.58.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.58.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvi
dia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:29:24.728695 1244393 out.go:177] * Starting "ha-600773" primary control-plane node in "ha-600773" cluster
	I1007 12:29:24.730431 1244393 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 12:29:24.732342 1244393 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 12:29:24.734430 1244393 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:29:24.734485 1244393 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 12:29:24.734498 1244393 cache.go:56] Caching tarball of preloaded images
	I1007 12:29:24.734513 1244393 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 12:29:24.734602 1244393 preload.go:172] Found /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 12:29:24.734613 1244393 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:29:24.734755 1244393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/config.json ...
	I1007 12:29:24.752003 1244393 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 12:29:24.752026 1244393 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 12:29:24.752047 1244393 cache.go:194] Successfully downloaded all kic artifacts
	I1007 12:29:24.752069 1244393 start.go:360] acquireMachinesLock for ha-600773: {Name:mkf63ff4d2575824085b6d898d8f01756b0952ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:29:24.752127 1244393 start.go:364] duration metric: took 35.085µs to acquireMachinesLock for "ha-600773"
	I1007 12:29:24.752151 1244393 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:29:24.752161 1244393 fix.go:54] fixHost starting: 
	I1007 12:29:24.752448 1244393 cli_runner.go:164] Run: docker container inspect ha-600773 --format={{.State.Status}}
	I1007 12:29:24.769707 1244393 fix.go:112] recreateIfNeeded on ha-600773: state=Stopped err=<nil>
	W1007 12:29:24.769744 1244393 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:29:24.772159 1244393 out.go:177] * Restarting existing docker container for "ha-600773" ...
	I1007 12:29:24.773885 1244393 cli_runner.go:164] Run: docker start ha-600773
	I1007 12:29:25.079642 1244393 cli_runner.go:164] Run: docker container inspect ha-600773 --format={{.State.Status}}
	I1007 12:29:25.102796 1244393 kic.go:430] container "ha-600773" state is running.
	I1007 12:29:25.103540 1244393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-600773
	I1007 12:29:25.127663 1244393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/config.json ...
	I1007 12:29:25.129140 1244393 machine.go:93] provisionDockerMachine start ...
	I1007 12:29:25.129216 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773
	I1007 12:29:25.155486 1244393 main.go:141] libmachine: Using SSH client type: native
	I1007 12:29:25.155750 1244393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34307 <nil> <nil>}
	I1007 12:29:25.155760 1244393 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:29:25.157695 1244393 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1007 12:29:28.295585 1244393 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-600773
	
	I1007 12:29:28.295611 1244393 ubuntu.go:169] provisioning hostname "ha-600773"
	I1007 12:29:28.295673 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773
	I1007 12:29:28.312777 1244393 main.go:141] libmachine: Using SSH client type: native
	I1007 12:29:28.313031 1244393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34307 <nil> <nil>}
	I1007 12:29:28.313050 1244393 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-600773 && echo "ha-600773" | sudo tee /etc/hostname
	I1007 12:29:28.460355 1244393 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-600773
	
	I1007 12:29:28.460443 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773
	I1007 12:29:28.477488 1244393 main.go:141] libmachine: Using SSH client type: native
	I1007 12:29:28.477755 1244393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34307 <nil> <nil>}
	I1007 12:29:28.477778 1244393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-600773' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-600773/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-600773' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:29:28.612381 1244393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:29:28.612411 1244393 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19763-1173066/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-1173066/.minikube}
	I1007 12:29:28.612441 1244393 ubuntu.go:177] setting up certificates
	I1007 12:29:28.612456 1244393 provision.go:84] configureAuth start
	I1007 12:29:28.612521 1244393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-600773
	I1007 12:29:28.629295 1244393 provision.go:143] copyHostCerts
	I1007 12:29:28.629342 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem
	I1007 12:29:28.629376 1244393 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem, removing ...
	I1007 12:29:28.629388 1244393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem
	I1007 12:29:28.629472 1244393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem (1078 bytes)
	I1007 12:29:28.629578 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem
	I1007 12:29:28.629601 1244393 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem, removing ...
	I1007 12:29:28.629609 1244393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem
	I1007 12:29:28.629640 1244393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem (1123 bytes)
	I1007 12:29:28.629698 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem
	I1007 12:29:28.629728 1244393 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem, removing ...
	I1007 12:29:28.629736 1244393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem
	I1007 12:29:28.629765 1244393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem (1675 bytes)
	I1007 12:29:28.629831 1244393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem org=jenkins.ha-600773 san=[127.0.0.1 192.168.58.2 ha-600773 localhost minikube]
	I1007 12:29:28.914689 1244393 provision.go:177] copyRemoteCerts
	I1007 12:29:28.914774 1244393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:29:28.914817 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773
	I1007 12:29:28.931641 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773/id_rsa Username:docker}
	I1007 12:29:29.029151 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:29:29.029209 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 12:29:29.053557 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:29:29.053620 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 12:29:29.077488 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:29:29.077558 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:29:29.101442 1244393 provision.go:87] duration metric: took 488.966018ms to configureAuth
	I1007 12:29:29.101467 1244393 ubuntu.go:193] setting minikube options for container-runtime
	I1007 12:29:29.101719 1244393 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:29:29.101826 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773
	I1007 12:29:29.119614 1244393 main.go:141] libmachine: Using SSH client type: native
	I1007 12:29:29.119869 1244393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34307 <nil> <nil>}
	I1007 12:29:29.119890 1244393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:29:29.517482 1244393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:29:29.517509 1244393 machine.go:96] duration metric: took 4.388348978s to provisionDockerMachine
	I1007 12:29:29.517521 1244393 start.go:293] postStartSetup for "ha-600773" (driver="docker")
	I1007 12:29:29.517533 1244393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:29:29.517603 1244393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:29:29.517655 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773
	I1007 12:29:29.545651 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773/id_rsa Username:docker}
	I1007 12:29:29.641194 1244393 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:29:29.644426 1244393 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 12:29:29.644479 1244393 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 12:29:29.644490 1244393 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 12:29:29.644498 1244393 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 12:29:29.644513 1244393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1173066/.minikube/addons for local assets ...
	I1007 12:29:29.644576 1244393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1173066/.minikube/files for local assets ...
	I1007 12:29:29.644654 1244393 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem -> 11784622.pem in /etc/ssl/certs
	I1007 12:29:29.644665 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem -> /etc/ssl/certs/11784622.pem
	I1007 12:29:29.644765 1244393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:29:29.653358 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem --> /etc/ssl/certs/11784622.pem (1708 bytes)
	I1007 12:29:29.677562 1244393 start.go:296] duration metric: took 160.024863ms for postStartSetup
	I1007 12:29:29.677666 1244393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:29:29.677757 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773
	I1007 12:29:29.694457 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773/id_rsa Username:docker}
	I1007 12:29:29.789023 1244393 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 12:29:29.793547 1244393 fix.go:56] duration metric: took 5.04137722s for fixHost
	I1007 12:29:29.793574 1244393 start.go:83] releasing machines lock for "ha-600773", held for 5.041433466s
	I1007 12:29:29.793666 1244393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-600773
	I1007 12:29:29.809453 1244393 ssh_runner.go:195] Run: cat /version.json
	I1007 12:29:29.809503 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773
	I1007 12:29:29.809523 1244393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:29:29.809593 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773
	I1007 12:29:29.830123 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773/id_rsa Username:docker}
	I1007 12:29:29.841924 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773/id_rsa Username:docker}
	I1007 12:29:29.923776 1244393 ssh_runner.go:195] Run: systemctl --version
	I1007 12:29:30.057792 1244393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:29:30.220930 1244393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 12:29:30.227334 1244393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:29:30.237438 1244393 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 12:29:30.237579 1244393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:29:30.246902 1244393 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 12:29:30.246972 1244393 start.go:495] detecting cgroup driver to use...
	I1007 12:29:30.247024 1244393 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 12:29:30.247113 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:29:30.259776 1244393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:29:30.272311 1244393 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:29:30.272397 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:29:30.286195 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:29:30.298362 1244393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:29:30.390573 1244393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:29:30.480353 1244393 docker.go:233] disabling docker service ...
	I1007 12:29:30.480432 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:29:30.492668 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:29:30.505162 1244393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:29:30.584688 1244393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:29:30.670229 1244393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:29:30.682559 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:29:30.698804 1244393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:29:30.698872 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:30.708565 1244393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:29:30.708631 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:30.718727 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:30.728643 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:30.738430 1244393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:29:30.747781 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:30.757612 1244393 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:30.767935 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:30.777667 1244393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:29:30.786188 1244393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:29:30.794524 1244393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:29:30.871719 1244393 ssh_runner.go:195] Run: sudo systemctl restart crio
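For reference, the CRI-O reconfiguration performed above boils down to a handful of in-place edits of /etc/crio/crio.conf.d/02-crio.conf followed by a restart. A minimal standalone sketch of the same steps (paths and values taken directly from the log lines above):

	# pin the pause image and switch CRI-O to the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# allow unprivileged low ports inside pods and enable IPv4 forwarding
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio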
	I1007 12:29:30.984619 1244393 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:29:30.984703 1244393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:29:30.988169 1244393 start.go:563] Will wait 60s for crictl version
	I1007 12:29:30.988231 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:29:30.991725 1244393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:29:31.036956 1244393 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 12:29:31.037041 1244393 ssh_runner.go:195] Run: crio --version
	I1007 12:29:31.083642 1244393 ssh_runner.go:195] Run: crio --version
	I1007 12:29:31.128555 1244393 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 12:29:31.130857 1244393 cli_runner.go:164] Run: docker network inspect ha-600773 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 12:29:31.146858 1244393 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1007 12:29:31.150665 1244393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
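The /etc/hosts rewrite above drops any stale host.minikube.internal entry and appends the current gateway address in one pass. The same one-liner expanded for readability (identical logic, temp-file name unchanged):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo "192.168.58.1	host.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts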
	I1007 12:29:31.162256 1244393 kubeadm.go:883] updating cluster {Name:ha-600773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-600773 Namespace:default APIServerHAVIP:192.168.58.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.58.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:29:31.162421 1244393 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:29:31.162482 1244393 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:29:31.207413 1244393 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:29:31.207440 1244393 crio.go:433] Images already preloaded, skipping extraction
	I1007 12:29:31.207505 1244393 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:29:31.247114 1244393 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:29:31.247134 1244393 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:29:31.247144 1244393 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.31.1 crio true true} ...
	I1007 12:29:31.247253 1244393 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-600773 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-600773 Namespace:default APIServerHAVIP:192.168.58.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
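If a restart like this one stalls at the kubelet stage, the unit fragment above ends up as the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). Plain systemd commands, not minikube-specific ones, are enough to confirm what kubelet actually loaded:

	systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart   # effective command line
	journalctl -u kubelet -n 50 --no-pager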
	I1007 12:29:31.247336 1244393 ssh_runner.go:195] Run: crio config
	I1007 12:29:31.302561 1244393 cni.go:84] Creating CNI manager for ""
	I1007 12:29:31.302582 1244393 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 12:29:31.302592 1244393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:29:31.302635 1244393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-600773 NodeName:ha-600773 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:29:31.302813 1244393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-600773"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
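The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are written out as /var/tmp/minikube/kubeadm.yaml.new further down in this log. A quick offline sanity check of such a file, assuming the bundled kubeadm binary is at the path shown below and supports the "config validate" subcommand (present in recent releases), could look like:

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new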
	
	I1007 12:29:31.302837 1244393 kube-vip.go:115] generating kube-vip config ...
	I1007 12:29:31.302888 1244393 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1007 12:29:31.315452 1244393 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:29:31.315571 1244393 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.58.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
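Once kubeadm brings this static pod up, kube-vip should attach the HA VIP 192.168.58.254 to eth0 on whichever control-plane node currently holds the plndr-cp-lock lease (all three values come from the env block above). Two generic checks, not part of this log, that confirm the VIP is being served:

	ip addr show eth0 | grep 192.168.58.254          # VIP present on the current leader
	kubectl -n kube-system get lease plndr-cp-lock   # which node holds the kube-vip lease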
	I1007 12:29:31.315639 1244393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:29:31.324228 1244393 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:29:31.324344 1244393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:29:31.333160 1244393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1007 12:29:31.350798 1244393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:29:31.368317 1244393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I1007 12:29:31.385814 1244393 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:29:31.403127 1244393 ssh_runner.go:195] Run: grep 192.168.58.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:29:31.406453 1244393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:29:31.416984 1244393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:29:31.498464 1244393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:29:31.512078 1244393 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773 for IP: 192.168.58.2
	I1007 12:29:31.512101 1244393 certs.go:194] generating shared ca certs ...
	I1007 12:29:31.512119 1244393 certs.go:226] acquiring lock for ca certs: {Name:mk2f3e101c3a8a21aa5a00b0d7100cac880b0543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:29:31.512297 1244393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key
	I1007 12:29:31.512350 1244393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key
	I1007 12:29:31.512360 1244393 certs.go:256] generating profile certs ...
	I1007 12:29:31.512467 1244393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/client.key
	I1007 12:29:31.512498 1244393 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.key.5187be9f
	I1007 12:29:31.512517 1244393 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.crt.5187be9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2 192.168.58.3 192.168.58.254]
	I1007 12:29:31.781302 1244393 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.crt.5187be9f ...
	I1007 12:29:31.781333 1244393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.crt.5187be9f: {Name:mkfd554c855fec6c93807776bfde69a91e963fa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:29:31.781534 1244393 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.key.5187be9f ...
	I1007 12:29:31.781552 1244393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.key.5187be9f: {Name:mk58402539210d9f60cdaa8f1cd2f96a90508232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:29:31.781637 1244393 certs.go:381] copying /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.crt.5187be9f -> /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.crt
	I1007 12:29:31.781785 1244393 certs.go:385] copying /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.key.5187be9f -> /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.key
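The regenerated apiserver certificate above is signed for the IPs logged at 12:29:31.512517 (service IPs, localhost, both control-plane node IPs and the HA VIP). A standard openssl call against the profile copy referenced above will show the embedded SANs:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'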
	I1007 12:29:31.781921 1244393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/proxy-client.key
	I1007 12:29:31.781939 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:29:31.781955 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:29:31.781972 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:29:31.781989 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:29:31.782008 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:29:31.782028 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:29:31.782042 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:29:31.782056 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:29:31.782108 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/1178462.pem (1338 bytes)
	W1007 12:29:31.782144 1244393 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/1178462_empty.pem, impossibly tiny 0 bytes
	I1007 12:29:31.782158 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 12:29:31.782184 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem (1078 bytes)
	I1007 12:29:31.782213 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:29:31.782239 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem (1675 bytes)
	I1007 12:29:31.782287 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem (1708 bytes)
	I1007 12:29:31.782318 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:29:31.782385 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/1178462.pem -> /usr/share/ca-certificates/1178462.pem
	I1007 12:29:31.782403 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem -> /usr/share/ca-certificates/11784622.pem
	I1007 12:29:31.782991 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:29:31.808085 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:29:31.838059 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:29:31.862988 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:29:31.886628 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 12:29:31.910828 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:29:31.935347 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:29:31.959650 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 12:29:31.984589 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:29:32.013049 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/1178462.pem --> /usr/share/ca-certificates/1178462.pem (1338 bytes)
	I1007 12:29:32.038051 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem --> /usr/share/ca-certificates/11784622.pem (1708 bytes)
	I1007 12:29:32.063020 1244393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:29:32.081447 1244393 ssh_runner.go:195] Run: openssl version
	I1007 12:29:32.087283 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:29:32.096953 1244393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:29:32.100502 1244393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:58 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:29:32.100577 1244393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:29:32.107562 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:29:32.117087 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1178462.pem && ln -fs /usr/share/ca-certificates/1178462.pem /etc/ssl/certs/1178462.pem"
	I1007 12:29:32.126770 1244393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1178462.pem
	I1007 12:29:32.130342 1244393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:16 /usr/share/ca-certificates/1178462.pem
	I1007 12:29:32.130408 1244393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1178462.pem
	I1007 12:29:32.137478 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1178462.pem /etc/ssl/certs/51391683.0"
	I1007 12:29:32.146865 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11784622.pem && ln -fs /usr/share/ca-certificates/11784622.pem /etc/ssl/certs/11784622.pem"
	I1007 12:29:32.156721 1244393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11784622.pem
	I1007 12:29:32.160696 1244393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:16 /usr/share/ca-certificates/11784622.pem
	I1007 12:29:32.160777 1244393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11784622.pem
	I1007 12:29:32.167868 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11784622.pem /etc/ssl/certs/3ec20f2e.0"
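The three symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: the link name is the output of openssl x509 -hash for that certificate plus a ".0" suffix, which is how TLS clients locate a CA by hash in /etc/ssl/certs. Reproducing the first one by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941, per the symlink above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${h}.0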
	I1007 12:29:32.177061 1244393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:29:32.180456 1244393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:29:32.187062 1244393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:29:32.194062 1244393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:29:32.200944 1244393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:29:32.208592 1244393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:29:32.215292 1244393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
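Each of the openssl runs above uses -checkend 86400, which exits 0 only if the certificate will still be valid 24 hours (86400 seconds) from now. The same check in isolation, using one of the paths from the log:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo 'still valid in 24h' || echo 'expires within 24h'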
	I1007 12:29:32.221985 1244393 kubeadm.go:392] StartCluster: {Name:ha-600773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-600773 Namespace:default APIServerHAVIP:192.168.58.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.58.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:29:32.222143 1244393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:29:32.222203 1244393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:29:32.260517 1244393 cri.go:89] found id: ""
	I1007 12:29:32.260593 1244393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:29:32.269439 1244393 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 12:29:32.269462 1244393 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 12:29:32.269515 1244393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 12:29:32.278109 1244393 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 12:29:32.278561 1244393 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-600773" does not appear in /home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 12:29:32.278669 1244393 kubeconfig.go:62] /home/jenkins/minikube-integration/19763-1173066/kubeconfig needs updating (will repair): [kubeconfig missing "ha-600773" cluster setting kubeconfig missing "ha-600773" context setting]
	I1007 12:29:32.278948 1244393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/kubeconfig: {Name:mkfc1e9493ee5c91f2837c31acce39f4935ee46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:29:32.279395 1244393 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 12:29:32.279654 1244393 kapi.go:59] client config for ha-600773: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/client.key", CAFile:"/home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e94a20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:29:32.280113 1244393 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 12:29:32.280618 1244393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 12:29:32.289510 1244393 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.58.2
	I1007 12:29:32.289536 1244393 kubeadm.go:597] duration metric: took 20.067328ms to restartPrimaryControlPlane
	I1007 12:29:32.289546 1244393 kubeadm.go:394] duration metric: took 67.570882ms to StartCluster
	I1007 12:29:32.289574 1244393 settings.go:142] acquiring lock: {Name:mk942b9f169f258985b7aaeeac5d38deaf461542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:29:32.289650 1244393 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 12:29:32.290242 1244393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1173066/kubeconfig: {Name:mkfc1e9493ee5c91f2837c31acce39f4935ee46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:29:32.290706 1244393 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:29:32.290478 1244393 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:29:32.290769 1244393 start.go:241] waiting for startup goroutines ...
	I1007 12:29:32.290782 1244393 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:29:32.296224 1244393 out.go:177] * Enabled addons: 
	I1007 12:29:32.298493 1244393 addons.go:510] duration metric: took 7.710303ms for enable addons: enabled=[]
	I1007 12:29:32.298533 1244393 start.go:246] waiting for cluster config update ...
	I1007 12:29:32.298546 1244393 start.go:255] writing updated cluster config ...
	I1007 12:29:32.301152 1244393 out.go:201] 
	I1007 12:29:32.303693 1244393 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:29:32.303803 1244393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/config.json ...
	I1007 12:29:32.306439 1244393 out.go:177] * Starting "ha-600773-m02" control-plane node in "ha-600773" cluster
	I1007 12:29:32.309075 1244393 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 12:29:32.311259 1244393 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 12:29:32.313482 1244393 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:29:32.313506 1244393 cache.go:56] Caching tarball of preloaded images
	I1007 12:29:32.313577 1244393 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 12:29:32.313636 1244393 preload.go:172] Found /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 12:29:32.313647 1244393 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:29:32.313772 1244393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/config.json ...
	I1007 12:29:32.331514 1244393 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 12:29:32.331538 1244393 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 12:29:32.331557 1244393 cache.go:194] Successfully downloaded all kic artifacts
	I1007 12:29:32.331582 1244393 start.go:360] acquireMachinesLock for ha-600773-m02: {Name:mk2fb212ce12ee713e40e149950ef2cf7d8ce054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:29:32.331642 1244393 start.go:364] duration metric: took 37.686µs to acquireMachinesLock for "ha-600773-m02"
	I1007 12:29:32.331666 1244393 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:29:32.331678 1244393 fix.go:54] fixHost starting: m02
	I1007 12:29:32.331950 1244393 cli_runner.go:164] Run: docker container inspect ha-600773-m02 --format={{.State.Status}}
	I1007 12:29:32.348201 1244393 fix.go:112] recreateIfNeeded on ha-600773-m02: state=Stopped err=<nil>
	W1007 12:29:32.348230 1244393 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:29:32.350784 1244393 out.go:177] * Restarting existing docker container for "ha-600773-m02" ...
	I1007 12:29:32.353222 1244393 cli_runner.go:164] Run: docker start ha-600773-m02
	I1007 12:29:32.628853 1244393 cli_runner.go:164] Run: docker container inspect ha-600773-m02 --format={{.State.Status}}
	I1007 12:29:32.653656 1244393 kic.go:430] container "ha-600773-m02" state is running.
	I1007 12:29:32.654010 1244393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-600773-m02
	I1007 12:29:32.676537 1244393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/config.json ...
	I1007 12:29:32.676786 1244393 machine.go:93] provisionDockerMachine start ...
	I1007 12:29:32.676844 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m02
	I1007 12:29:32.695238 1244393 main.go:141] libmachine: Using SSH client type: native
	I1007 12:29:32.695477 1244393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34312 <nil> <nil>}
	I1007 12:29:32.695486 1244393 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:29:32.696376 1244393 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1007 12:29:35.890819 1244393 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-600773-m02
	
	I1007 12:29:35.890846 1244393 ubuntu.go:169] provisioning hostname "ha-600773-m02"
	I1007 12:29:35.890949 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m02
	I1007 12:29:35.925397 1244393 main.go:141] libmachine: Using SSH client type: native
	I1007 12:29:35.925642 1244393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34312 <nil> <nil>}
	I1007 12:29:35.925661 1244393 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-600773-m02 && echo "ha-600773-m02" | sudo tee /etc/hostname
	I1007 12:29:36.150197 1244393 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-600773-m02
	
	I1007 12:29:36.150293 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m02
	I1007 12:29:36.190513 1244393 main.go:141] libmachine: Using SSH client type: native
	I1007 12:29:36.190775 1244393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34312 <nil> <nil>}
	I1007 12:29:36.190798 1244393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-600773-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-600773-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-600773-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:29:36.365778 1244393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:29:36.365806 1244393 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19763-1173066/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-1173066/.minikube}
	I1007 12:29:36.365824 1244393 ubuntu.go:177] setting up certificates
	I1007 12:29:36.365843 1244393 provision.go:84] configureAuth start
	I1007 12:29:36.365922 1244393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-600773-m02
	I1007 12:29:36.413886 1244393 provision.go:143] copyHostCerts
	I1007 12:29:36.413934 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem
	I1007 12:29:36.413968 1244393 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem, removing ...
	I1007 12:29:36.413979 1244393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem
	I1007 12:29:36.414057 1244393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem (1078 bytes)
	I1007 12:29:36.414155 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem
	I1007 12:29:36.414176 1244393 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem, removing ...
	I1007 12:29:36.414185 1244393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem
	I1007 12:29:36.414216 1244393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem (1123 bytes)
	I1007 12:29:36.414273 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem
	I1007 12:29:36.414298 1244393 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem, removing ...
	I1007 12:29:36.414306 1244393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem
	I1007 12:29:36.414333 1244393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem (1675 bytes)
	I1007 12:29:36.414405 1244393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem org=jenkins.ha-600773-m02 san=[127.0.0.1 192.168.58.3 ha-600773-m02 localhost minikube]
	I1007 12:29:36.826393 1244393 provision.go:177] copyRemoteCerts
	I1007 12:29:36.826472 1244393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:29:36.826554 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m02
	I1007 12:29:36.845127 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34312 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m02/id_rsa Username:docker}
	I1007 12:29:36.970450 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:29:36.970510 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 12:29:37.045656 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:29:37.045816 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:29:37.090287 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:29:37.090398 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:29:37.126588 1244393 provision.go:87] duration metric: took 760.724642ms to configureAuth
	I1007 12:29:37.126628 1244393 ubuntu.go:193] setting minikube options for container-runtime
	I1007 12:29:37.126879 1244393 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:29:37.127019 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m02
	I1007 12:29:37.146922 1244393 main.go:141] libmachine: Using SSH client type: native
	I1007 12:29:37.147174 1244393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34312 <nil> <nil>}
	I1007 12:29:37.147194 1244393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:29:37.561325 1244393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:29:37.561351 1244393 machine.go:96] duration metric: took 4.884555003s to provisionDockerMachine
	I1007 12:29:37.561364 1244393 start.go:293] postStartSetup for "ha-600773-m02" (driver="docker")
	I1007 12:29:37.561376 1244393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:29:37.561449 1244393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:29:37.561494 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m02
	I1007 12:29:37.587560 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34312 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m02/id_rsa Username:docker}
	I1007 12:29:37.732141 1244393 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:29:37.748695 1244393 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 12:29:37.748740 1244393 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 12:29:37.748752 1244393 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 12:29:37.748759 1244393 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 12:29:37.748771 1244393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1173066/.minikube/addons for local assets ...
	I1007 12:29:37.748832 1244393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1173066/.minikube/files for local assets ...
	I1007 12:29:37.748912 1244393 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem -> 11784622.pem in /etc/ssl/certs
	I1007 12:29:37.748925 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem -> /etc/ssl/certs/11784622.pem
	I1007 12:29:37.749026 1244393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:29:37.782781 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem --> /etc/ssl/certs/11784622.pem (1708 bytes)
	I1007 12:29:37.842501 1244393 start.go:296] duration metric: took 281.120416ms for postStartSetup
	I1007 12:29:37.842589 1244393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:29:37.842629 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m02
	I1007 12:29:37.868051 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34312 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m02/id_rsa Username:docker}
	I1007 12:29:37.996493 1244393 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 12:29:38.013856 1244393 fix.go:56] duration metric: took 5.682166899s for fixHost
	I1007 12:29:38.013888 1244393 start.go:83] releasing machines lock for "ha-600773-m02", held for 5.682232458s
	I1007 12:29:38.013981 1244393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-600773-m02
	I1007 12:29:38.049257 1244393 out.go:177] * Found network options:
	I1007 12:29:38.051906 1244393 out.go:177]   - NO_PROXY=192.168.58.2
	W1007 12:29:38.054041 1244393 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:29:38.054088 1244393 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:29:38.054162 1244393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:29:38.054220 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m02
	I1007 12:29:38.054502 1244393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:29:38.054558 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m02
	I1007 12:29:38.102656 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34312 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m02/id_rsa Username:docker}
	I1007 12:29:38.105257 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34312 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m02/id_rsa Username:docker}
	I1007 12:29:38.564160 1244393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 12:29:38.593953 1244393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:29:38.644017 1244393 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 12:29:38.644097 1244393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:29:38.698900 1244393 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 12:29:38.698971 1244393 start.go:495] detecting cgroup driver to use...
	I1007 12:29:38.699016 1244393 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 12:29:38.699093 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:29:38.750659 1244393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:29:38.794572 1244393 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:29:38.794681 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:29:38.851295 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:29:38.893696 1244393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:29:39.218122 1244393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:29:39.485538 1244393 docker.go:233] disabling docker service ...
	I1007 12:29:39.485627 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:29:39.530449 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:29:39.579913 1244393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:29:39.874808 1244393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:29:40.202660 1244393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:29:40.270011 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:29:40.362143 1244393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:29:40.362265 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:40.395008 1244393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:29:40.395129 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:40.419260 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:40.440606 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:40.475219 1244393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:29:40.502449 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:40.533882 1244393 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:40.574241 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:29:40.620766 1244393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:29:40.666468 1244393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:29:40.694353 1244393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:29:40.974925 1244393 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:29:42.421980 1244393 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.446980523s)
	I1007 12:29:42.422004 1244393 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:29:42.422066 1244393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:29:42.428780 1244393 start.go:563] Will wait 60s for crictl version
	I1007 12:29:42.428845 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:29:42.434478 1244393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:29:42.519234 1244393 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 12:29:42.519316 1244393 ssh_runner.go:195] Run: crio --version
	I1007 12:29:42.595611 1244393 ssh_runner.go:195] Run: crio --version
	I1007 12:29:42.744837 1244393 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 12:29:42.747373 1244393 out.go:177]   - env NO_PROXY=192.168.58.2
	I1007 12:29:42.750038 1244393 cli_runner.go:164] Run: docker network inspect ha-600773 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 12:29:42.793962 1244393 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1007 12:29:42.816760 1244393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:29:42.855573 1244393 mustload.go:65] Loading cluster: ha-600773
	I1007 12:29:42.855811 1244393 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:29:42.856083 1244393 cli_runner.go:164] Run: docker container inspect ha-600773 --format={{.State.Status}}
	I1007 12:29:42.886145 1244393 host.go:66] Checking if "ha-600773" exists ...
	I1007 12:29:42.886417 1244393 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773 for IP: 192.168.58.3
	I1007 12:29:42.886432 1244393 certs.go:194] generating shared ca certs ...
	I1007 12:29:42.886446 1244393 certs.go:226] acquiring lock for ca certs: {Name:mk2f3e101c3a8a21aa5a00b0d7100cac880b0543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:29:42.886562 1244393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key
	I1007 12:29:42.886606 1244393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key
	I1007 12:29:42.886617 1244393 certs.go:256] generating profile certs ...
	I1007 12:29:42.886690 1244393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/client.key
	I1007 12:29:42.886758 1244393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.key.48a70623
	I1007 12:29:42.886803 1244393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/proxy-client.key
	I1007 12:29:42.886816 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:29:42.886829 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:29:42.886846 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:29:42.886857 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:29:42.886868 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:29:42.886880 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:29:42.886896 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:29:42.886912 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:29:42.886958 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/1178462.pem (1338 bytes)
	W1007 12:29:42.886993 1244393 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/1178462_empty.pem, impossibly tiny 0 bytes
	I1007 12:29:42.887006 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 12:29:42.887030 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem (1078 bytes)
	I1007 12:29:42.887057 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:29:42.887084 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem (1675 bytes)
	I1007 12:29:42.887132 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem (1708 bytes)
	I1007 12:29:42.887164 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/1178462.pem -> /usr/share/ca-certificates/1178462.pem
	I1007 12:29:42.887181 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem -> /usr/share/ca-certificates/11784622.pem
	I1007 12:29:42.887194 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:29:42.887252 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773
	I1007 12:29:42.917956 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773/id_rsa Username:docker}
	I1007 12:29:43.032536 1244393 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:29:43.047078 1244393 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:29:43.074155 1244393 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:29:43.083006 1244393 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:29:43.111264 1244393 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:29:43.122721 1244393 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:29:43.146041 1244393 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:29:43.158302 1244393 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:29:43.179912 1244393 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:29:43.193690 1244393 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:29:43.222054 1244393 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:29:43.235332 1244393 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:29:43.269324 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:29:43.310525 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:29:43.353568 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:29:43.384894 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:29:43.409935 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 12:29:43.439164 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:29:43.466634 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:29:43.501882 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 12:29:43.554503 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/1178462.pem --> /usr/share/ca-certificates/1178462.pem (1338 bytes)
	I1007 12:29:43.597478 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem --> /usr/share/ca-certificates/11784622.pem (1708 bytes)
	I1007 12:29:43.634479 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:29:43.683011 1244393 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:29:43.721071 1244393 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:29:43.750128 1244393 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:29:43.780780 1244393 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:29:43.819125 1244393 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:29:43.847248 1244393 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:29:43.882733 1244393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:29:43.910499 1244393 ssh_runner.go:195] Run: openssl version
	I1007 12:29:43.918616 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1178462.pem && ln -fs /usr/share/ca-certificates/1178462.pem /etc/ssl/certs/1178462.pem"
	I1007 12:29:43.935505 1244393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1178462.pem
	I1007 12:29:43.939061 1244393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:16 /usr/share/ca-certificates/1178462.pem
	I1007 12:29:43.939134 1244393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1178462.pem
	I1007 12:29:43.953134 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1178462.pem /etc/ssl/certs/51391683.0"
	I1007 12:29:43.965754 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11784622.pem && ln -fs /usr/share/ca-certificates/11784622.pem /etc/ssl/certs/11784622.pem"
	I1007 12:29:43.980469 1244393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11784622.pem
	I1007 12:29:43.985665 1244393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:16 /usr/share/ca-certificates/11784622.pem
	I1007 12:29:43.985746 1244393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11784622.pem
	I1007 12:29:43.994466 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11784622.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:29:44.005788 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:29:44.022058 1244393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:29:44.026579 1244393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:58 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:29:44.026660 1244393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:29:44.036274 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
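The /etc/ssl/certs/<hash>.0 names used above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup convention; each link can be rebuilt by hand roughly like this (a sketch using the minikubeCA path from the log):

    # compute the subject hash OpenSSL uses for CA lookup and refresh the symlink
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941 for this CA
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"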
	I1007 12:29:44.047328 1244393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:29:44.051638 1244393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:29:44.063181 1244393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:29:44.074340 1244393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:29:44.082003 1244393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:29:44.094237 1244393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:29:44.105140 1244393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
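Each of the -checkend 86400 runs above is a 24-hour expiry guard: openssl x509 -checkend exits 0 only if the certificate stays valid for at least the given number of seconds. A minimal stand-alone form of the same check (illustrative):

    # non-zero exit means the cert expires within the next 24h (86400s)
    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert valid for at least 24h"
    else
      echo "cert expires within 24h"
    fi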
	I1007 12:29:44.113475 1244393 kubeadm.go:934] updating node {m02 192.168.58.3 8443 v1.31.1 crio true true} ...
	I1007 12:29:44.113591 1244393 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-600773-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-600773 Namespace:default APIServerHAVIP:192.168.58.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
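The kubelet unit above registers this machine as control-plane node ha-600773-m02 with node IP 192.168.58.3. Once the join completes, that registration can be spot-checked from the host (an illustrative command, not part of the test run):

    # confirm the second control plane registered with the expected name and internal IP
    kubectl --context ha-600773 get node ha-600773-m02 -o wide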
	I1007 12:29:44.113628 1244393 kube-vip.go:115] generating kube-vip config ...
	I1007 12:29:44.113680 1244393 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1007 12:29:44.129723 1244393 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:29:44.129800 1244393 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.58.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
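This static pod pins the control-plane VIP 192.168.58.254 on eth0, with leader election via the plndr-cp-lock lease and load-balancing on port 8443. Whether the VIP is actually live can be checked on whichever node holds the lease (hedged, illustrative commands):

    # the elected kube-vip instance should have added the VIP to eth0
    ip addr show dev eth0 | grep 192.168.58.254
    # and the apiserver should answer on the load-balanced port behind it
    curl -sk https://192.168.58.254:8443/healthz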
	I1007 12:29:44.129880 1244393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:29:44.140209 1244393 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:29:44.140317 1244393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:29:44.149587 1244393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1007 12:29:44.177391 1244393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:29:44.202850 1244393 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:29:44.230896 1244393 ssh_runner.go:195] Run: grep 192.168.58.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:29:44.234692 1244393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:29:44.246709 1244393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:29:44.432867 1244393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:29:44.453359 1244393 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:29:44.453918 1244393 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:29:44.458068 1244393 out.go:177] * Verifying Kubernetes components...
	I1007 12:29:44.460463 1244393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:29:44.653995 1244393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:29:44.674305 1244393 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 12:29:44.674620 1244393 kapi.go:59] client config for ha-600773: &rest.Config{Host:"https://192.168.58.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/client.key", CAFile:"/home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e94a20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:29:44.674692 1244393 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.58.254:8443 with https://192.168.58.2:8443
	I1007 12:29:44.674957 1244393 node_ready.go:35] waiting up to 6m0s for node "ha-600773-m02" to be "Ready" ...
	I1007 12:29:44.675060 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:29:44.675068 1244393 round_trippers.go:469] Request Headers:
	I1007 12:29:44.675077 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:29:44.675081 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:29:58.185693 1244393 round_trippers.go:574] Response Status: 500 Internal Server Error in 13510 milliseconds
	I1007 12:29:58.186123 1244393 node_ready.go:53] error getting node "ha-600773-m02": etcdserver: request timed out
	I1007 12:29:58.186184 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:29:58.186189 1244393 round_trippers.go:469] Request Headers:
	I1007 12:29:58.186197 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:29:58.186203 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:02.476924 1244393 round_trippers.go:574] Response Status: 200 OK in 4290 milliseconds
	I1007 12:30:02.480283 1244393 node_ready.go:49] node "ha-600773-m02" has status "Ready":"True"
	I1007 12:30:02.480307 1244393 node_ready.go:38] duration metric: took 17.805323477s for node "ha-600773-m02" to be "Ready" ...
	I1007 12:30:02.480317 1244393 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:30:02.480360 1244393 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 12:30:02.480372 1244393 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 12:30:02.480442 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1007 12:30:02.480449 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:02.480457 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:02.480461 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:02.570774 1244393 round_trippers.go:574] Response Status: 429 Too Many Requests in 90 milliseconds
	I1007 12:30:03.575169 1244393 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1007 12:30:03.575217 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1007 12:30:03.575223 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:03.575232 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:03.575236 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:03.659847 1244393 round_trippers.go:574] Response Status: 200 OK in 84 milliseconds
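The readiness loop below queries each system pod through the apiserver directly, backing off on the 429/Retry-After responses seen above. Roughly the same wait can be expressed with kubectl against the labels listed earlier (a sketch, assuming the ha-600773 context is available on the host):

    # wait for the system-critical components to report Ready, label by label
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context ha-600773 -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
    done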
	I1007 12:30:03.687088 1244393 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:03.687267 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:30:03.687295 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:03.687328 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:03.687347 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:03.706653 1244393 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1007 12:30:03.711490 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:30:03.711558 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:03.711583 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:03.711604 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:03.724562 1244393 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1007 12:30:03.725567 1244393 pod_ready.go:93] pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:03.725624 1244393 pod_ready.go:82] duration metric: took 38.454238ms for pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:03.725651 1244393 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jrczl" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:03.725749 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jrczl
	I1007 12:30:03.725783 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:03.725805 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:03.725829 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:03.751894 1244393 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I1007 12:30:03.753026 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:30:03.753048 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:03.753058 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:03.753062 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:03.780479 1244393 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1007 12:30:03.781114 1244393 pod_ready.go:93] pod "coredns-7c65d6cfc9-jrczl" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:03.781164 1244393 pod_ready.go:82] duration metric: took 55.492735ms for pod "coredns-7c65d6cfc9-jrczl" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:03.781192 1244393 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-600773" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:03.781289 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-600773
	I1007 12:30:03.781322 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:03.781344 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:03.781364 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:03.792090 1244393 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1007 12:30:03.792806 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:30:03.792828 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:03.792836 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:03.792841 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:03.818113 1244393 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I1007 12:30:03.819034 1244393 pod_ready.go:93] pod "etcd-ha-600773" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:03.819089 1244393 pod_ready.go:82] duration metric: took 37.875625ms for pod "etcd-ha-600773" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:03.819116 1244393 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:03.819200 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-600773-m02
	I1007 12:30:03.819226 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:03.819250 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:03.819273 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:03.830904 1244393 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:30:03.831883 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:30:03.831901 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:03.831920 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:03.831946 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:03.839282 1244393 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:30:03.840231 1244393 pod_ready.go:93] pod "etcd-ha-600773-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:03.840290 1244393 pod_ready.go:82] duration metric: took 21.152998ms for pod "etcd-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:03.840318 1244393 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-600773-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:03.840425 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-600773-m03
	I1007 12:30:03.840452 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:03.840474 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:03.840494 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:03.853765 1244393 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1007 12:30:03.854644 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m03
	I1007 12:30:03.854692 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:03.854717 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:03.854739 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:03.867282 1244393 round_trippers.go:574] Response Status: 404 Not Found in 12 milliseconds
	I1007 12:30:03.867615 1244393 pod_ready.go:98] node "ha-600773-m03" hosting pod "etcd-ha-600773-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-600773-m03": nodes "ha-600773-m03" not found
	I1007 12:30:03.867662 1244393 pod_ready.go:82] duration metric: took 27.316402ms for pod "etcd-ha-600773-m03" in "kube-system" namespace to be "Ready" ...
	E1007 12:30:03.867692 1244393 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-600773-m03" hosting pod "etcd-ha-600773-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-600773-m03": nodes "ha-600773-m03" not found
	I1007 12:30:03.867735 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-600773" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:03.976073 1244393 request.go:632] Waited for 108.251741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-600773
	I1007 12:30:03.976158 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-600773
	I1007 12:30:03.976214 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:03.976227 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:03.976232 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:03.986294 1244393 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1007 12:30:04.175952 1244393 request.go:632] Waited for 188.339457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:30:04.176006 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:30:04.176011 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:04.176020 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:04.176025 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:04.178914 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:30:04.180161 1244393 pod_ready.go:93] pod "kube-apiserver-ha-600773" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:04.180215 1244393 pod_ready.go:82] duration metric: took 312.453344ms for pod "kube-apiserver-ha-600773" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:04.180267 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:04.375344 1244393 request.go:632] Waited for 194.989707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-600773-m02
	I1007 12:30:04.375443 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-600773-m02
	I1007 12:30:04.375471 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:04.375498 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:04.375520 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:04.389681 1244393 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1007 12:30:04.575632 1244393 request.go:632] Waited for 183.629728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:30:04.575726 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:30:04.575748 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:04.575779 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:04.575809 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:04.582209 1244393 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:30:04.582854 1244393 pod_ready.go:93] pod "kube-apiserver-ha-600773-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:04.582906 1244393 pod_ready.go:82] duration metric: took 402.608619ms for pod "kube-apiserver-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:04.582935 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-600773-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:04.775250 1244393 request.go:632] Waited for 192.234719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-600773-m03
	I1007 12:30:04.775367 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-600773-m03
	I1007 12:30:04.775390 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:04.775422 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:04.775444 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:04.791090 1244393 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1007 12:30:04.975238 1244393 request.go:632] Waited for 183.139442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m03
	I1007 12:30:04.975342 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m03
	I1007 12:30:04.975363 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:04.975391 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:04.975416 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:04.990886 1244393 round_trippers.go:574] Response Status: 404 Not Found in 15 milliseconds
	I1007 12:30:04.991049 1244393 pod_ready.go:98] node "ha-600773-m03" hosting pod "kube-apiserver-ha-600773-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-600773-m03": nodes "ha-600773-m03" not found
	I1007 12:30:04.991087 1244393 pod_ready.go:82] duration metric: took 408.131445ms for pod "kube-apiserver-ha-600773-m03" in "kube-system" namespace to be "Ready" ...
	E1007 12:30:04.991112 1244393 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-600773-m03" hosting pod "kube-apiserver-ha-600773-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-600773-m03": nodes "ha-600773-m03" not found
	I1007 12:30:04.991134 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-600773" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:05.175445 1244393 request.go:632] Waited for 184.224595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-600773
	I1007 12:30:05.175986 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-600773
	I1007 12:30:05.176009 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:05.176047 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:05.176071 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:05.185252 1244393 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:30:05.375240 1244393 request.go:632] Waited for 189.23522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:30:05.375347 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:30:05.375372 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:05.375398 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:05.375419 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:05.380426 1244393 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:30:05.381046 1244393 pod_ready.go:93] pod "kube-controller-manager-ha-600773" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:05.381096 1244393 pod_ready.go:82] duration metric: took 389.934936ms for pod "kube-controller-manager-ha-600773" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:05.381125 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:05.575314 1244393 request.go:632] Waited for 194.104284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-600773-m02
	I1007 12:30:05.575412 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-600773-m02
	I1007 12:30:05.575446 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:05.575466 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:05.575472 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:05.578812 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:30:05.775828 1244393 request.go:632] Waited for 196.295994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:30:05.775940 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:30:05.775967 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:05.775993 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:05.776015 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:05.781972 1244393 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:30:05.782588 1244393 pod_ready.go:93] pod "kube-controller-manager-ha-600773-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:05.782643 1244393 pod_ready.go:82] duration metric: took 401.489516ms for pod "kube-controller-manager-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:05.782671 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-600773-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:05.976046 1244393 request.go:632] Waited for 193.293394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-600773-m03
	I1007 12:30:05.976143 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-600773-m03
	I1007 12:30:05.976188 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:05.976212 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:05.976254 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:05.985941 1244393 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:30:06.175231 1244393 request.go:632] Waited for 188.243426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m03
	I1007 12:30:06.175337 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m03
	I1007 12:30:06.175363 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:06.175397 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:06.175419 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:06.195076 1244393 round_trippers.go:574] Response Status: 404 Not Found in 19 milliseconds
	I1007 12:30:06.195423 1244393 pod_ready.go:98] node "ha-600773-m03" hosting pod "kube-controller-manager-ha-600773-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-600773-m03": nodes "ha-600773-m03" not found
	I1007 12:30:06.195473 1244393 pod_ready.go:82] duration metric: took 412.780303ms for pod "kube-controller-manager-ha-600773-m03" in "kube-system" namespace to be "Ready" ...
	E1007 12:30:06.195499 1244393 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-600773-m03" hosting pod "kube-controller-manager-ha-600773-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-600773-m03": nodes "ha-600773-m03" not found
	I1007 12:30:06.195521 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-449qt" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:06.375779 1244393 request.go:632] Waited for 180.170663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-449qt
	I1007 12:30:06.375876 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-449qt
	I1007 12:30:06.375900 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:06.375926 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:06.375958 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:06.378908 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:30:06.576238 1244393 request.go:632] Waited for 196.332483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m03
	I1007 12:30:06.576391 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m03
	I1007 12:30:06.576401 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:06.576410 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:06.576414 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:06.579136 1244393 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1007 12:30:06.579454 1244393 pod_ready.go:98] node "ha-600773-m03" hosting pod "kube-proxy-449qt" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-600773-m03": nodes "ha-600773-m03" not found
	I1007 12:30:06.579513 1244393 pod_ready.go:82] duration metric: took 383.964566ms for pod "kube-proxy-449qt" in "kube-system" namespace to be "Ready" ...
	E1007 12:30:06.579538 1244393 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-600773-m03" hosting pod "kube-proxy-449qt" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-600773-m03": nodes "ha-600773-m03" not found
	I1007 12:30:06.579566 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gnxd8" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:06.775985 1244393 request.go:632] Waited for 196.317164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnxd8
	I1007 12:30:06.776091 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnxd8
	I1007 12:30:06.776115 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:06.776149 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:06.776173 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:06.781726 1244393 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:30:06.976084 1244393 request.go:632] Waited for 193.319019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:30:06.976205 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:30:06.976222 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:06.976232 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:06.976266 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:06.979209 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:30:06.980215 1244393 pod_ready.go:93] pod "kube-proxy-gnxd8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:06.980316 1244393 pod_ready.go:82] duration metric: took 400.722779ms for pod "kube-proxy-gnxd8" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:06.980346 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rvn82" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:07.175710 1244393 request.go:632] Waited for 195.247758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rvn82
	I1007 12:30:07.175814 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rvn82
	I1007 12:30:07.175835 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:07.175873 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:07.175903 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:07.183736 1244393 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:30:07.375322 1244393 request.go:632] Waited for 190.247069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:30:07.375427 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:30:07.375460 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:07.375486 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:07.375509 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:07.378420 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:30:07.378981 1244393 pod_ready.go:93] pod "kube-proxy-rvn82" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:07.379029 1244393 pod_ready.go:82] duration metric: took 398.641377ms for pod "kube-proxy-rvn82" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:07.379056 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vf8ng" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:07.575404 1244393 request.go:632] Waited for 196.236542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vf8ng
	I1007 12:30:07.575513 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vf8ng
	I1007 12:30:07.575548 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:07.575575 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:07.575593 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:07.578410 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:30:07.775769 1244393 request.go:632] Waited for 196.303364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:30:07.775872 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:30:07.775906 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:07.775931 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:07.775950 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:07.779275 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:30:07.779945 1244393 pod_ready.go:93] pod "kube-proxy-vf8ng" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:07.779984 1244393 pod_ready.go:82] duration metric: took 400.907689ms for pod "kube-proxy-vf8ng" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:07.780011 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-600773" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:07.975315 1244393 request.go:632] Waited for 195.192833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-600773
	I1007 12:30:07.975430 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-600773
	I1007 12:30:07.975460 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:07.975490 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:07.975513 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:07.978502 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:30:08.175739 1244393 request.go:632] Waited for 196.102069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:30:08.175869 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:30:08.175895 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:08.175917 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:08.175939 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:08.178938 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:30:08.180016 1244393 pod_ready.go:93] pod "kube-scheduler-ha-600773" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:08.180081 1244393 pod_ready.go:82] duration metric: took 400.048628ms for pod "kube-scheduler-ha-600773" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:08.180109 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:08.375358 1244393 request.go:632] Waited for 195.150496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-600773-m02
	I1007 12:30:08.375421 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-600773-m02
	I1007 12:30:08.375429 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:08.375446 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:08.375471 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:08.378453 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:30:08.576295 1244393 request.go:632] Waited for 197.232982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:30:08.576360 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:30:08.576367 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:08.576376 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:08.576382 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:08.579976 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:30:08.580596 1244393 pod_ready.go:93] pod "kube-scheduler-ha-600773-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:30:08.580620 1244393 pod_ready.go:82] duration metric: took 400.490496ms for pod "kube-scheduler-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:08.580635 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-600773-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:30:08.776078 1244393 request.go:632] Waited for 195.374206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-600773-m03
	I1007 12:30:08.776148 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-600773-m03
	I1007 12:30:08.776159 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:08.776169 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:08.776174 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:08.779141 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:30:08.975281 1244393 request.go:632] Waited for 195.238429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m03
	I1007 12:30:08.975344 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m03
	I1007 12:30:08.975353 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:08.975362 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:08.975369 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:08.977965 1244393 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1007 12:30:08.978164 1244393 pod_ready.go:98] node "ha-600773-m03" hosting pod "kube-scheduler-ha-600773-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-600773-m03": nodes "ha-600773-m03" not found
	I1007 12:30:08.978181 1244393 pod_ready.go:82] duration metric: took 397.538627ms for pod "kube-scheduler-ha-600773-m03" in "kube-system" namespace to be "Ready" ...
	E1007 12:30:08.978192 1244393 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-600773-m03" hosting pod "kube-scheduler-ha-600773-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-600773-m03": nodes "ha-600773-m03" not found
	I1007 12:30:08.978205 1244393 pod_ready.go:39] duration metric: took 6.497873378s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:30:08.978220 1244393 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:30:08.978284 1244393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:30:08.990281 1244393 api_server.go:72] duration metric: took 24.536873822s to wait for apiserver process to appear ...
	I1007 12:30:08.990307 1244393 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:30:08.990329 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:08.997957 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:08.997999 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:09.490572 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:09.498190 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:09.498232 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:09.990868 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:10.000915 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:10.000953 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:10.490418 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:10.498191 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:10.498220 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:10.990454 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:10.999014 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:10.999116 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:11.490610 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:11.498203 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:11.498237 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:11.990748 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:11.998392 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:11.998420 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:12.491110 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:12.498775 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:12.498802 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:12.991369 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:13.000066 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:13.000100 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:13.490452 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:13.498540 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:13.498581 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:13.991308 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:13.999098 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:13.999129 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:14.490456 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:14.498134 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:14.498178 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:14.991137 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:14.998936 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:14.998962 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:15.491365 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:15.499985 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:15.500034 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:15.990603 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:15.998134 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:15.998160 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:16.490455 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:16.498182 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:16.498221 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:16.990448 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:16.997983 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:16.998019 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:17.491311 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:17.506597 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:17.506668 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:17.990859 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:17.998627 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:17.998755 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:18.491398 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:18.502692 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:18.502737 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:18.991388 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:18.999239 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:18.999275 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:19.490464 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:19.498105 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:19.498180 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:19.990479 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:19.998317 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:19.998345 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:20.490535 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:20.498353 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:20.498392 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:20.990884 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:20.998924 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:20.999009 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:21.490533 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:21.498175 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:21.498202 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:21.990658 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:21.998329 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:21.998369 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:22.490758 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:22.498543 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:22.498583 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:22.991161 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:23.000857 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:23.000963 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:23.490497 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:23.498434 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:23.498460 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:23.991084 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:23.998999 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:23.999031 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:24.490534 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:24.498312 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:24.498351 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:24.990849 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:25.013516 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:25.013713 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:25.491269 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:25.520824 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:25.520919 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:25.991072 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:26.016798 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:26.016881 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:26.491357 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:26.501072 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:26.501155 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:26.990441 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:26.998985 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:26.999073 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:27.490618 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:27.499000 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:27.499031 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:27.990447 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:27.999287 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:27.999374 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:28.491016 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:28.498667 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:28.498694 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:28.991332 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:29.001156 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:29.001191 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:29.490454 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:29.498090 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:29.498124 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:29.990449 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:30.036468 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:30.036504 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:30.491079 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:30.498704 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:30.498736 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:30.991356 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:31.000933 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:31.001034 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:31.490447 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:31.498111 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:31.498139 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:31.990944 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:31.998694 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:31.998726 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:32.491380 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:32.499244 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:32.499272 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:32.990796 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:32.998509 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:32.998550 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:33.490746 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:33.498773 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:33.498815 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:33.990405 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:33.998280 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:33.998311 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:34.491022 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:34.498666 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:34.498695 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:34.990577 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:34.998456 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:34.998494 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:35.491072 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:35.498865 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:35.498898 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:35.990485 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:35.998106 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:35.998137 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:36.491410 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:36.499265 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:36.499299 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:36.990455 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:36.998275 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:36.998306 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:37.491047 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:37.498954 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:37.498986 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:37.990495 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:37.998319 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:37.998354 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:38.491059 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:38.498891 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:38.498933 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:38.990951 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:38.998807 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:38.998839 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:39.491398 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:39.499411 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:39.499440 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:39.991233 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:40.003995 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:40.004038 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:40.490520 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:40.500366 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:40.500408 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:40.991022 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:40.998798 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:40.998831 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:41.490481 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:41.501024 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:41.501069 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:41.990472 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:41.998082 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:41.998113 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:42.490748 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:42.498596 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:42.498622 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:42.991088 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:42.999137 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:42.999167 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:43.490473 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:43.499223 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:43.499256 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:30:43.990471 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:43.998256 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:30:43.998290 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
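Note on the repeated 500 responses above: they all come from the apiserver's verbose /healthz endpoint, and in every dump only the start-service-ip-repair-controllers post-start hook is reported as failed. If the same per-check breakdown needs to be pulled from a running cluster by hand, an authenticated raw request gives equivalent output (illustrative only, assuming the cluster's kubeconfig is the active context):

	kubectl get --raw='/healthz?verbose'

Each line of the response mirrors the [+]/[-] entries logged above, and the output ends with a passing summary ("healthz check passed") once every hook reports ok.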
	I1007 12:30:44.491032 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:30:44.491118 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:30:44.539206 1244393 cri.go:89] found id: "1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26"
	I1007 12:30:44.539226 1244393 cri.go:89] found id: "e2d8e16ab6e4416b2ec1e038d834f007b4293b8238bd1fc35aad05802aa01734"
	I1007 12:30:44.539230 1244393 cri.go:89] found id: ""
	I1007 12:30:44.539237 1244393 logs.go:282] 2 containers: [1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26 e2d8e16ab6e4416b2ec1e038d834f007b4293b8238bd1fc35aad05802aa01734]
	I1007 12:30:44.539293 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:44.543853 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:44.547789 1244393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 12:30:44.547869 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:30:44.589700 1244393 cri.go:89] found id: "7de5972f18060687902a6f038e29c3e28fda98bd6ac03b0ebd1b3ac5222de288"
	I1007 12:30:44.589722 1244393 cri.go:89] found id: "7df6851264e535e9d4b12438e531f76bb9e20a04e17b97191644bd4787e9ca37"
	I1007 12:30:44.589727 1244393 cri.go:89] found id: ""
	I1007 12:30:44.589733 1244393 logs.go:282] 2 containers: [7de5972f18060687902a6f038e29c3e28fda98bd6ac03b0ebd1b3ac5222de288 7df6851264e535e9d4b12438e531f76bb9e20a04e17b97191644bd4787e9ca37]
	I1007 12:30:44.589798 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:44.593287 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:44.596587 1244393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 12:30:44.596664 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:30:44.640021 1244393 cri.go:89] found id: ""
	I1007 12:30:44.640057 1244393 logs.go:282] 0 containers: []
	W1007 12:30:44.640066 1244393 logs.go:284] No container was found matching "coredns"
	I1007 12:30:44.640072 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:30:44.640150 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:30:44.680179 1244393 cri.go:89] found id: "3dbdd97e20c40140072af942004c6ec3e353b78c02ebf370bdd4d3eb1befad9c"
	I1007 12:30:44.680204 1244393 cri.go:89] found id: "66219da96ba7dea0d36a087d7fc164c2d7acb83280af213191b7c52ee5a2593f"
	I1007 12:30:44.680210 1244393 cri.go:89] found id: ""
	I1007 12:30:44.680217 1244393 logs.go:282] 2 containers: [3dbdd97e20c40140072af942004c6ec3e353b78c02ebf370bdd4d3eb1befad9c 66219da96ba7dea0d36a087d7fc164c2d7acb83280af213191b7c52ee5a2593f]
	I1007 12:30:44.680301 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:44.683983 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:44.687398 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:30:44.687466 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:30:44.737450 1244393 cri.go:89] found id: "30f45f5fc73fe3417aa659960eab7d7f88e0528a065317fa85e7db0a718bf033"
	I1007 12:30:44.737523 1244393 cri.go:89] found id: ""
	I1007 12:30:44.737545 1244393 logs.go:282] 1 containers: [30f45f5fc73fe3417aa659960eab7d7f88e0528a065317fa85e7db0a718bf033]
	I1007 12:30:44.737623 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:44.741531 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:30:44.741610 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:30:44.786861 1244393 cri.go:89] found id: "315a8f4c0c3821ad67e9d1c352c449e3b08a2ae3a8382e116a164e69a4f2f5f9"
	I1007 12:30:44.786883 1244393 cri.go:89] found id: "271206a187b74974405eaf7cafad84a5ac8b74db5258ceebd28e27bb590dabf5"
	I1007 12:30:44.786888 1244393 cri.go:89] found id: ""
	I1007 12:30:44.786895 1244393 logs.go:282] 2 containers: [315a8f4c0c3821ad67e9d1c352c449e3b08a2ae3a8382e116a164e69a4f2f5f9 271206a187b74974405eaf7cafad84a5ac8b74db5258ceebd28e27bb590dabf5]
	I1007 12:30:44.786983 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:44.790694 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:44.794177 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 12:30:44.794282 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:30:44.836901 1244393 cri.go:89] found id: "b36175d2b15ab522aa15745f28907cfcd54f379c1abd94eb4730513425daf086"
	I1007 12:30:44.836929 1244393 cri.go:89] found id: ""
	I1007 12:30:44.836937 1244393 logs.go:282] 1 containers: [b36175d2b15ab522aa15745f28907cfcd54f379c1abd94eb4730513425daf086]
	I1007 12:30:44.837000 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:44.841616 1244393 logs.go:123] Gathering logs for etcd [7df6851264e535e9d4b12438e531f76bb9e20a04e17b97191644bd4787e9ca37] ...
	I1007 12:30:44.841649 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7df6851264e535e9d4b12438e531f76bb9e20a04e17b97191644bd4787e9ca37"
	I1007 12:30:44.895520 1244393 logs.go:123] Gathering logs for kube-controller-manager [271206a187b74974405eaf7cafad84a5ac8b74db5258ceebd28e27bb590dabf5] ...
	I1007 12:30:44.895562 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 271206a187b74974405eaf7cafad84a5ac8b74db5258ceebd28e27bb590dabf5"
	I1007 12:30:44.941024 1244393 logs.go:123] Gathering logs for kindnet [b36175d2b15ab522aa15745f28907cfcd54f379c1abd94eb4730513425daf086] ...
	I1007 12:30:44.941051 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b36175d2b15ab522aa15745f28907cfcd54f379c1abd94eb4730513425daf086"
	I1007 12:30:44.983155 1244393 logs.go:123] Gathering logs for dmesg ...
	I1007 12:30:44.983182 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:30:45.004617 1244393 logs.go:123] Gathering logs for kube-apiserver [1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26] ...
	I1007 12:30:45.004666 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26"
	I1007 12:30:45.104315 1244393 logs.go:123] Gathering logs for kube-scheduler [3dbdd97e20c40140072af942004c6ec3e353b78c02ebf370bdd4d3eb1befad9c] ...
	I1007 12:30:45.104353 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dbdd97e20c40140072af942004c6ec3e353b78c02ebf370bdd4d3eb1befad9c"
	I1007 12:30:45.193602 1244393 logs.go:123] Gathering logs for kube-scheduler [66219da96ba7dea0d36a087d7fc164c2d7acb83280af213191b7c52ee5a2593f] ...
	I1007 12:30:45.193633 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66219da96ba7dea0d36a087d7fc164c2d7acb83280af213191b7c52ee5a2593f"
	I1007 12:30:45.294131 1244393 logs.go:123] Gathering logs for kube-proxy [30f45f5fc73fe3417aa659960eab7d7f88e0528a065317fa85e7db0a718bf033] ...
	I1007 12:30:45.294228 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30f45f5fc73fe3417aa659960eab7d7f88e0528a065317fa85e7db0a718bf033"
	I1007 12:30:45.352716 1244393 logs.go:123] Gathering logs for kube-controller-manager [315a8f4c0c3821ad67e9d1c352c449e3b08a2ae3a8382e116a164e69a4f2f5f9] ...
	I1007 12:30:45.352797 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a8f4c0c3821ad67e9d1c352c449e3b08a2ae3a8382e116a164e69a4f2f5f9"
	I1007 12:30:45.423124 1244393 logs.go:123] Gathering logs for kubelet ...
	I1007 12:30:45.423165 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 12:30:45.512134 1244393 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:30:45.512171 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:30:45.771265 1244393 logs.go:123] Gathering logs for CRI-O ...
	I1007 12:30:45.771304 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 12:30:45.844041 1244393 logs.go:123] Gathering logs for container status ...
	I1007 12:30:45.844119 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:30:45.888627 1244393 logs.go:123] Gathering logs for kube-apiserver [e2d8e16ab6e4416b2ec1e038d834f007b4293b8238bd1fc35aad05802aa01734] ...
	I1007 12:30:45.888663 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d8e16ab6e4416b2ec1e038d834f007b4293b8238bd1fc35aad05802aa01734"
	I1007 12:30:45.927377 1244393 logs.go:123] Gathering logs for etcd [7de5972f18060687902a6f038e29c3e28fda98bd6ac03b0ebd1b3ac5222de288] ...
	I1007 12:30:45.927404 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7de5972f18060687902a6f038e29c3e28fda98bd6ac03b0ebd1b3ac5222de288"
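The crictl calls in the log-gathering pass above can also be run interactively inside the node if a container needs closer inspection. A sketch, illustrative only (the container ID is copied from the "found id" lines above; the profile name ha-600773 is an assumption taken from the node/pod names elsewhere in this run):

	minikube -p ha-600773 ssh -- sudo crictl ps -a --name=kube-apiserver
	minikube -p ha-600773 ssh -- sudo crictl logs --tail 400 1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26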
	I1007 12:30:48.479594 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:50.796691 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 12:30:50.796717 1244393 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 12:30:50.796744 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:30:50.796807 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:30:50.887700 1244393 cri.go:89] found id: "1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26"
	I1007 12:30:50.887719 1244393 cri.go:89] found id: "e2d8e16ab6e4416b2ec1e038d834f007b4293b8238bd1fc35aad05802aa01734"
	I1007 12:30:50.887725 1244393 cri.go:89] found id: ""
	I1007 12:30:50.887732 1244393 logs.go:282] 2 containers: [1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26 e2d8e16ab6e4416b2ec1e038d834f007b4293b8238bd1fc35aad05802aa01734]
	I1007 12:30:50.887789 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:50.894587 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:50.901097 1244393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 12:30:50.901174 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:30:50.991169 1244393 cri.go:89] found id: "7de5972f18060687902a6f038e29c3e28fda98bd6ac03b0ebd1b3ac5222de288"
	I1007 12:30:50.991193 1244393 cri.go:89] found id: "7df6851264e535e9d4b12438e531f76bb9e20a04e17b97191644bd4787e9ca37"
	I1007 12:30:50.991199 1244393 cri.go:89] found id: ""
	I1007 12:30:50.991206 1244393 logs.go:282] 2 containers: [7de5972f18060687902a6f038e29c3e28fda98bd6ac03b0ebd1b3ac5222de288 7df6851264e535e9d4b12438e531f76bb9e20a04e17b97191644bd4787e9ca37]
	I1007 12:30:50.991265 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:50.998906 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:51.011787 1244393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 12:30:51.011890 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:30:51.078053 1244393 cri.go:89] found id: ""
	I1007 12:30:51.078097 1244393 logs.go:282] 0 containers: []
	W1007 12:30:51.078108 1244393 logs.go:284] No container was found matching "coredns"
	I1007 12:30:51.078116 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:30:51.078184 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:30:51.147132 1244393 cri.go:89] found id: "3dbdd97e20c40140072af942004c6ec3e353b78c02ebf370bdd4d3eb1befad9c"
	I1007 12:30:51.147155 1244393 cri.go:89] found id: "66219da96ba7dea0d36a087d7fc164c2d7acb83280af213191b7c52ee5a2593f"
	I1007 12:30:51.147166 1244393 cri.go:89] found id: ""
	I1007 12:30:51.147175 1244393 logs.go:282] 2 containers: [3dbdd97e20c40140072af942004c6ec3e353b78c02ebf370bdd4d3eb1befad9c 66219da96ba7dea0d36a087d7fc164c2d7acb83280af213191b7c52ee5a2593f]
	I1007 12:30:51.147235 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:51.151359 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:51.155622 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:30:51.155698 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:30:51.196068 1244393 cri.go:89] found id: "30f45f5fc73fe3417aa659960eab7d7f88e0528a065317fa85e7db0a718bf033"
	I1007 12:30:51.196092 1244393 cri.go:89] found id: ""
	I1007 12:30:51.196100 1244393 logs.go:282] 1 containers: [30f45f5fc73fe3417aa659960eab7d7f88e0528a065317fa85e7db0a718bf033]
	I1007 12:30:51.196156 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:51.199802 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:30:51.199888 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:30:51.244427 1244393 cri.go:89] found id: "315a8f4c0c3821ad67e9d1c352c449e3b08a2ae3a8382e116a164e69a4f2f5f9"
	I1007 12:30:51.244447 1244393 cri.go:89] found id: "271206a187b74974405eaf7cafad84a5ac8b74db5258ceebd28e27bb590dabf5"
	I1007 12:30:51.244452 1244393 cri.go:89] found id: ""
	I1007 12:30:51.244459 1244393 logs.go:282] 2 containers: [315a8f4c0c3821ad67e9d1c352c449e3b08a2ae3a8382e116a164e69a4f2f5f9 271206a187b74974405eaf7cafad84a5ac8b74db5258ceebd28e27bb590dabf5]
	I1007 12:30:51.244513 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:51.247842 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:51.251046 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 12:30:51.251115 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:30:51.297282 1244393 cri.go:89] found id: "b36175d2b15ab522aa15745f28907cfcd54f379c1abd94eb4730513425daf086"
	I1007 12:30:51.297305 1244393 cri.go:89] found id: ""
	I1007 12:30:51.297315 1244393 logs.go:282] 1 containers: [b36175d2b15ab522aa15745f28907cfcd54f379c1abd94eb4730513425daf086]
	I1007 12:30:51.297373 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:51.301240 1244393 logs.go:123] Gathering logs for kube-scheduler [66219da96ba7dea0d36a087d7fc164c2d7acb83280af213191b7c52ee5a2593f] ...
	I1007 12:30:51.301266 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66219da96ba7dea0d36a087d7fc164c2d7acb83280af213191b7c52ee5a2593f"
	I1007 12:30:51.348162 1244393 logs.go:123] Gathering logs for kube-proxy [30f45f5fc73fe3417aa659960eab7d7f88e0528a065317fa85e7db0a718bf033] ...
	I1007 12:30:51.348190 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30f45f5fc73fe3417aa659960eab7d7f88e0528a065317fa85e7db0a718bf033"
	I1007 12:30:51.406192 1244393 logs.go:123] Gathering logs for kube-controller-manager [271206a187b74974405eaf7cafad84a5ac8b74db5258ceebd28e27bb590dabf5] ...
	I1007 12:30:51.406222 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 271206a187b74974405eaf7cafad84a5ac8b74db5258ceebd28e27bb590dabf5"
	I1007 12:30:51.467908 1244393 logs.go:123] Gathering logs for CRI-O ...
	I1007 12:30:51.467935 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 12:30:51.544105 1244393 logs.go:123] Gathering logs for container status ...
	I1007 12:30:51.544140 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:30:51.621137 1244393 logs.go:123] Gathering logs for kubelet ...
	I1007 12:30:51.621171 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 12:30:51.710872 1244393 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:30:51.710912 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:30:52.109387 1244393 logs.go:123] Gathering logs for etcd [7de5972f18060687902a6f038e29c3e28fda98bd6ac03b0ebd1b3ac5222de288] ...
	I1007 12:30:52.109426 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7de5972f18060687902a6f038e29c3e28fda98bd6ac03b0ebd1b3ac5222de288"
	I1007 12:30:52.181326 1244393 logs.go:123] Gathering logs for etcd [7df6851264e535e9d4b12438e531f76bb9e20a04e17b97191644bd4787e9ca37] ...
	I1007 12:30:52.181362 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7df6851264e535e9d4b12438e531f76bb9e20a04e17b97191644bd4787e9ca37"
	I1007 12:30:52.253778 1244393 logs.go:123] Gathering logs for kube-controller-manager [315a8f4c0c3821ad67e9d1c352c449e3b08a2ae3a8382e116a164e69a4f2f5f9] ...
	I1007 12:30:52.253814 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a8f4c0c3821ad67e9d1c352c449e3b08a2ae3a8382e116a164e69a4f2f5f9"
	I1007 12:30:52.316508 1244393 logs.go:123] Gathering logs for dmesg ...
	I1007 12:30:52.316549 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:30:52.333062 1244393 logs.go:123] Gathering logs for kube-apiserver [e2d8e16ab6e4416b2ec1e038d834f007b4293b8238bd1fc35aad05802aa01734] ...
	I1007 12:30:52.333101 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d8e16ab6e4416b2ec1e038d834f007b4293b8238bd1fc35aad05802aa01734"
	I1007 12:30:52.383481 1244393 logs.go:123] Gathering logs for kindnet [b36175d2b15ab522aa15745f28907cfcd54f379c1abd94eb4730513425daf086] ...
	I1007 12:30:52.383511 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b36175d2b15ab522aa15745f28907cfcd54f379c1abd94eb4730513425daf086"
	I1007 12:30:52.432537 1244393 logs.go:123] Gathering logs for kube-apiserver [1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26] ...
	I1007 12:30:52.432571 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26"
	I1007 12:30:52.479843 1244393 logs.go:123] Gathering logs for kube-scheduler [3dbdd97e20c40140072af942004c6ec3e353b78c02ebf370bdd4d3eb1befad9c] ...
	I1007 12:30:52.479916 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dbdd97e20c40140072af942004c6ec3e353b78c02ebf370bdd4d3eb1befad9c"
	I1007 12:30:55.019886 1244393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 12:30:55.031699 1244393 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1007 12:30:55.031798 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1007 12:30:55.031814 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:55.031833 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:55.031839 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:55.048034 1244393 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1007 12:30:55.048224 1244393 api_server.go:141] control plane version: v1.31.1
	I1007 12:30:55.048395 1244393 api_server.go:131] duration metric: took 46.058075203s to wait for apiserver health ...
	I1007 12:30:55.048429 1244393 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:30:55.048483 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:30:55.048565 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:30:55.101912 1244393 cri.go:89] found id: "1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26"
	I1007 12:30:55.101936 1244393 cri.go:89] found id: "e2d8e16ab6e4416b2ec1e038d834f007b4293b8238bd1fc35aad05802aa01734"
	I1007 12:30:55.101943 1244393 cri.go:89] found id: ""
	I1007 12:30:55.101963 1244393 logs.go:282] 2 containers: [1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26 e2d8e16ab6e4416b2ec1e038d834f007b4293b8238bd1fc35aad05802aa01734]
	I1007 12:30:55.102044 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:55.106857 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:55.112065 1244393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 12:30:55.112173 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:30:55.161014 1244393 cri.go:89] found id: "7de5972f18060687902a6f038e29c3e28fda98bd6ac03b0ebd1b3ac5222de288"
	I1007 12:30:55.161047 1244393 cri.go:89] found id: "7df6851264e535e9d4b12438e531f76bb9e20a04e17b97191644bd4787e9ca37"
	I1007 12:30:55.161053 1244393 cri.go:89] found id: ""
	I1007 12:30:55.161061 1244393 logs.go:282] 2 containers: [7de5972f18060687902a6f038e29c3e28fda98bd6ac03b0ebd1b3ac5222de288 7df6851264e535e9d4b12438e531f76bb9e20a04e17b97191644bd4787e9ca37]
	I1007 12:30:55.161126 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:55.165602 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:55.169791 1244393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 12:30:55.169865 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:30:55.217617 1244393 cri.go:89] found id: ""
	I1007 12:30:55.217646 1244393 logs.go:282] 0 containers: []
	W1007 12:30:55.217656 1244393 logs.go:284] No container was found matching "coredns"
	I1007 12:30:55.217663 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:30:55.217726 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:30:55.256107 1244393 cri.go:89] found id: "3dbdd97e20c40140072af942004c6ec3e353b78c02ebf370bdd4d3eb1befad9c"
	I1007 12:30:55.256133 1244393 cri.go:89] found id: "66219da96ba7dea0d36a087d7fc164c2d7acb83280af213191b7c52ee5a2593f"
	I1007 12:30:55.256137 1244393 cri.go:89] found id: ""
	I1007 12:30:55.256145 1244393 logs.go:282] 2 containers: [3dbdd97e20c40140072af942004c6ec3e353b78c02ebf370bdd4d3eb1befad9c 66219da96ba7dea0d36a087d7fc164c2d7acb83280af213191b7c52ee5a2593f]
	I1007 12:30:55.256233 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:55.260179 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:55.263834 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:30:55.263916 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:30:55.305374 1244393 cri.go:89] found id: "30f45f5fc73fe3417aa659960eab7d7f88e0528a065317fa85e7db0a718bf033"
	I1007 12:30:55.305399 1244393 cri.go:89] found id: ""
	I1007 12:30:55.305416 1244393 logs.go:282] 1 containers: [30f45f5fc73fe3417aa659960eab7d7f88e0528a065317fa85e7db0a718bf033]
	I1007 12:30:55.305473 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:55.309026 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:30:55.309121 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:30:55.359602 1244393 cri.go:89] found id: "315a8f4c0c3821ad67e9d1c352c449e3b08a2ae3a8382e116a164e69a4f2f5f9"
	I1007 12:30:55.359627 1244393 cri.go:89] found id: "271206a187b74974405eaf7cafad84a5ac8b74db5258ceebd28e27bb590dabf5"
	I1007 12:30:55.359632 1244393 cri.go:89] found id: ""
	I1007 12:30:55.359640 1244393 logs.go:282] 2 containers: [315a8f4c0c3821ad67e9d1c352c449e3b08a2ae3a8382e116a164e69a4f2f5f9 271206a187b74974405eaf7cafad84a5ac8b74db5258ceebd28e27bb590dabf5]
	I1007 12:30:55.359717 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:55.363490 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:55.367064 1244393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 12:30:55.367143 1244393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:30:55.407052 1244393 cri.go:89] found id: "b36175d2b15ab522aa15745f28907cfcd54f379c1abd94eb4730513425daf086"
	I1007 12:30:55.407090 1244393 cri.go:89] found id: ""
	I1007 12:30:55.407099 1244393 logs.go:282] 1 containers: [b36175d2b15ab522aa15745f28907cfcd54f379c1abd94eb4730513425daf086]
	I1007 12:30:55.407165 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:30:55.411411 1244393 logs.go:123] Gathering logs for etcd [7df6851264e535e9d4b12438e531f76bb9e20a04e17b97191644bd4787e9ca37] ...
	I1007 12:30:55.411438 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7df6851264e535e9d4b12438e531f76bb9e20a04e17b97191644bd4787e9ca37"
	I1007 12:30:55.471192 1244393 logs.go:123] Gathering logs for kube-scheduler [3dbdd97e20c40140072af942004c6ec3e353b78c02ebf370bdd4d3eb1befad9c] ...
	I1007 12:30:55.471230 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dbdd97e20c40140072af942004c6ec3e353b78c02ebf370bdd4d3eb1befad9c"
	I1007 12:30:55.513283 1244393 logs.go:123] Gathering logs for kube-apiserver [1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26] ...
	I1007 12:30:55.513359 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f16f141ffa0cacf75c49ae5e7d5a42827b698c639e85669aa1bbc541d482b26"
	I1007 12:30:55.562214 1244393 logs.go:123] Gathering logs for kube-apiserver [e2d8e16ab6e4416b2ec1e038d834f007b4293b8238bd1fc35aad05802aa01734] ...
	I1007 12:30:55.562247 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d8e16ab6e4416b2ec1e038d834f007b4293b8238bd1fc35aad05802aa01734"
	I1007 12:30:55.599707 1244393 logs.go:123] Gathering logs for kube-proxy [30f45f5fc73fe3417aa659960eab7d7f88e0528a065317fa85e7db0a718bf033] ...
	I1007 12:30:55.599735 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30f45f5fc73fe3417aa659960eab7d7f88e0528a065317fa85e7db0a718bf033"
	I1007 12:30:55.644045 1244393 logs.go:123] Gathering logs for kube-controller-manager [315a8f4c0c3821ad67e9d1c352c449e3b08a2ae3a8382e116a164e69a4f2f5f9] ...
	I1007 12:30:55.644073 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a8f4c0c3821ad67e9d1c352c449e3b08a2ae3a8382e116a164e69a4f2f5f9"
	I1007 12:30:55.716406 1244393 logs.go:123] Gathering logs for kube-controller-manager [271206a187b74974405eaf7cafad84a5ac8b74db5258ceebd28e27bb590dabf5] ...
	I1007 12:30:55.716488 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 271206a187b74974405eaf7cafad84a5ac8b74db5258ceebd28e27bb590dabf5"
	I1007 12:30:55.764537 1244393 logs.go:123] Gathering logs for kubelet ...
	I1007 12:30:55.764618 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 12:30:55.846618 1244393 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:30:55.846656 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:30:56.160637 1244393 logs.go:123] Gathering logs for kube-scheduler [66219da96ba7dea0d36a087d7fc164c2d7acb83280af213191b7c52ee5a2593f] ...
	I1007 12:30:56.160675 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66219da96ba7dea0d36a087d7fc164c2d7acb83280af213191b7c52ee5a2593f"
	I1007 12:30:56.199083 1244393 logs.go:123] Gathering logs for dmesg ...
	I1007 12:30:56.199113 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:30:56.216453 1244393 logs.go:123] Gathering logs for etcd [7de5972f18060687902a6f038e29c3e28fda98bd6ac03b0ebd1b3ac5222de288] ...
	I1007 12:30:56.216484 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7de5972f18060687902a6f038e29c3e28fda98bd6ac03b0ebd1b3ac5222de288"
	I1007 12:30:56.266336 1244393 logs.go:123] Gathering logs for container status ...
	I1007 12:30:56.266394 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:30:56.326854 1244393 logs.go:123] Gathering logs for kindnet [b36175d2b15ab522aa15745f28907cfcd54f379c1abd94eb4730513425daf086] ...
	I1007 12:30:56.326884 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b36175d2b15ab522aa15745f28907cfcd54f379c1abd94eb4730513425daf086"
	I1007 12:30:56.366332 1244393 logs.go:123] Gathering logs for CRI-O ...
	I1007 12:30:56.366361 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 12:30:58.936885 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1007 12:30:58.936914 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:58.936924 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:58.936929 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:58.945922 1244393 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:30:58.957427 1244393 system_pods.go:59] 19 kube-system pods found
	I1007 12:30:58.957522 1244393 system_pods.go:61] "coredns-7c65d6cfc9-blfnw" [33304e72-0d8a-496c-b36d-638df0022ad1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:30:58.957549 1244393 system_pods.go:61] "coredns-7c65d6cfc9-jrczl" [a37b07be-4864-44d4-8c18-570b216b548a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:30:58.957586 1244393 system_pods.go:61] "etcd-ha-600773" [a22daecc-c679-43de-96df-284a393e3683] Running
	I1007 12:30:58.957616 1244393 system_pods.go:61] "etcd-ha-600773-m02" [46c435bc-661d-4ccf-8c88-7b23ad804c8b] Running
	I1007 12:30:58.957638 1244393 system_pods.go:61] "kindnet-4zd8h" [7e5adc21-befe-41d1-a1cf-972de529c8d0] Running
	I1007 12:30:58.957662 1244393 system_pods.go:61] "kindnet-cqxld" [3cf11318-ceb0-49ae-9845-21b9b8a16399] Running
	I1007 12:30:58.957704 1244393 system_pods.go:61] "kindnet-xtjsq" [6ba82c3d-9d5f-4c7d-84e5-b7cde09d40c9] Running
	I1007 12:30:58.957732 1244393 system_pods.go:61] "kube-apiserver-ha-600773" [1523a82c-dae8-44ae-943e-ada35422e0fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 12:30:58.957752 1244393 system_pods.go:61] "kube-apiserver-ha-600773-m02" [952a49cf-7419-411b-baf0-108c2b658fef] Running
	I1007 12:30:58.957776 1244393 system_pods.go:61] "kube-controller-manager-ha-600773" [a1ad4719-8da9-4fa7-acfc-d9bd4efb1623] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 12:30:58.957812 1244393 system_pods.go:61] "kube-controller-manager-ha-600773-m02" [74cf4371-3c51-4b64-8c41-a80271912a70] Running
	I1007 12:30:58.957840 1244393 system_pods.go:61] "kube-proxy-gnxd8" [083888e8-be83-4df6-bea7-38cf412588b0] Running
	I1007 12:30:58.957861 1244393 system_pods.go:61] "kube-proxy-rvn82" [79fa2c21-5068-4810-89d8-fe84764d878b] Running
	I1007 12:30:58.957883 1244393 system_pods.go:61] "kube-proxy-vf8ng" [1fe46e1d-dbbd-46e2-b1aa-4cd083469f0a] Running
	I1007 12:30:58.957916 1244393 system_pods.go:61] "kube-scheduler-ha-600773" [7f7f6892-e4f0-4510-837a-cf8360a0d498] Running
	I1007 12:30:58.957939 1244393 system_pods.go:61] "kube-scheduler-ha-600773-m02" [611dfb5b-fd2c-456e-b5a3-f236c4525e88] Running
	I1007 12:30:58.957959 1244393 system_pods.go:61] "kube-vip-ha-600773" [ae980b66-6c09-48a8-9443-cddcecbc56e0] Running
	I1007 12:30:58.957979 1244393 system_pods.go:61] "kube-vip-ha-600773-m02" [e4b84c55-10bf-4db3-b5a4-599348511945] Running
	I1007 12:30:58.958001 1244393 system_pods.go:61] "storage-provisioner" [e2a921c6-bb50-43c3-8d29-8a927f351793] Running
	I1007 12:30:58.958036 1244393 system_pods.go:74] duration metric: took 3.909598052s to wait for pod list to return data ...
	I1007 12:30:58.958060 1244393 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:30:58.958182 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:30:58.958207 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:58.958241 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:58.958260 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:58.977127 1244393 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1007 12:30:58.977641 1244393 default_sa.go:45] found service account: "default"
	I1007 12:30:58.977660 1244393 default_sa.go:55] duration metric: took 19.580033ms for default service account to be created ...
	I1007 12:30:58.977670 1244393 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:30:58.977737 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1007 12:30:58.977742 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:58.977749 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:58.977753 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:58.986897 1244393 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:30:58.996411 1244393 system_pods.go:86] 19 kube-system pods found
	I1007 12:30:58.996492 1244393 system_pods.go:89] "coredns-7c65d6cfc9-blfnw" [33304e72-0d8a-496c-b36d-638df0022ad1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:30:58.996523 1244393 system_pods.go:89] "coredns-7c65d6cfc9-jrczl" [a37b07be-4864-44d4-8c18-570b216b548a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:30:58.996563 1244393 system_pods.go:89] "etcd-ha-600773" [a22daecc-c679-43de-96df-284a393e3683] Running
	I1007 12:30:58.996591 1244393 system_pods.go:89] "etcd-ha-600773-m02" [46c435bc-661d-4ccf-8c88-7b23ad804c8b] Running
	I1007 12:30:58.996612 1244393 system_pods.go:89] "kindnet-4zd8h" [7e5adc21-befe-41d1-a1cf-972de529c8d0] Running
	I1007 12:30:58.996634 1244393 system_pods.go:89] "kindnet-cqxld" [3cf11318-ceb0-49ae-9845-21b9b8a16399] Running
	I1007 12:30:58.996667 1244393 system_pods.go:89] "kindnet-xtjsq" [6ba82c3d-9d5f-4c7d-84e5-b7cde09d40c9] Running
	I1007 12:30:58.996691 1244393 system_pods.go:89] "kube-apiserver-ha-600773" [1523a82c-dae8-44ae-943e-ada35422e0fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 12:30:58.996710 1244393 system_pods.go:89] "kube-apiserver-ha-600773-m02" [952a49cf-7419-411b-baf0-108c2b658fef] Running
	I1007 12:30:58.996750 1244393 system_pods.go:89] "kube-controller-manager-ha-600773" [a1ad4719-8da9-4fa7-acfc-d9bd4efb1623] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 12:30:58.996779 1244393 system_pods.go:89] "kube-controller-manager-ha-600773-m02" [74cf4371-3c51-4b64-8c41-a80271912a70] Running
	I1007 12:30:58.996803 1244393 system_pods.go:89] "kube-proxy-gnxd8" [083888e8-be83-4df6-bea7-38cf412588b0] Running
	I1007 12:30:58.996824 1244393 system_pods.go:89] "kube-proxy-rvn82" [79fa2c21-5068-4810-89d8-fe84764d878b] Running
	I1007 12:30:58.996855 1244393 system_pods.go:89] "kube-proxy-vf8ng" [1fe46e1d-dbbd-46e2-b1aa-4cd083469f0a] Running
	I1007 12:30:58.996887 1244393 system_pods.go:89] "kube-scheduler-ha-600773" [7f7f6892-e4f0-4510-837a-cf8360a0d498] Running
	I1007 12:30:58.996931 1244393 system_pods.go:89] "kube-scheduler-ha-600773-m02" [611dfb5b-fd2c-456e-b5a3-f236c4525e88] Running
	I1007 12:30:58.996950 1244393 system_pods.go:89] "kube-vip-ha-600773" [ae980b66-6c09-48a8-9443-cddcecbc56e0] Running
	I1007 12:30:58.996976 1244393 system_pods.go:89] "kube-vip-ha-600773-m02" [e4b84c55-10bf-4db3-b5a4-599348511945] Running
	I1007 12:30:58.997003 1244393 system_pods.go:89] "storage-provisioner" [e2a921c6-bb50-43c3-8d29-8a927f351793] Running
	I1007 12:30:58.997028 1244393 system_pods.go:126] duration metric: took 19.351243ms to wait for k8s-apps to be running ...
	I1007 12:30:58.997050 1244393 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:30:58.997141 1244393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:30:59.023556 1244393 system_svc.go:56] duration metric: took 26.496268ms WaitForService to wait for kubelet
	I1007 12:30:59.023587 1244393 kubeadm.go:582] duration metric: took 1m14.570183719s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:30:59.023610 1244393 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:30:59.023684 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1007 12:30:59.023709 1244393 round_trippers.go:469] Request Headers:
	I1007 12:30:59.023717 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:30:59.023724 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:30:59.026803 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:30:59.028741 1244393 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 12:30:59.028776 1244393 node_conditions.go:123] node cpu capacity is 2
	I1007 12:30:59.028789 1244393 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 12:30:59.028795 1244393 node_conditions.go:123] node cpu capacity is 2
	I1007 12:30:59.028800 1244393 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 12:30:59.028805 1244393 node_conditions.go:123] node cpu capacity is 2
	I1007 12:30:59.028811 1244393 node_conditions.go:105] duration metric: took 5.195641ms to run NodePressure ...
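The three capacity pairs above come from a single `GET /api/v1/nodes` and correspond to the three nodes that are up at this point in the ha-600773 cluster. A minimal client-go sketch that reads the same capacity fields follows; the kubeconfig path is an assumption for illustration, and this is not minikube's node_conditions.go:

// nodecapacity.go: sketch of reading per-node cpu and ephemeral-storage capacity.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Matches the "ephemeral capacity" / "cpu capacity" log lines above.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}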
	I1007 12:30:59.028824 1244393 start.go:241] waiting for startup goroutines ...
	I1007 12:30:59.028849 1244393 start.go:255] writing updated cluster config ...
	I1007 12:30:59.031984 1244393 out.go:201] 
	I1007 12:30:59.034717 1244393 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:30:59.034833 1244393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/config.json ...
	I1007 12:30:59.037802 1244393 out.go:177] * Starting "ha-600773-m04" worker node in "ha-600773" cluster
	I1007 12:30:59.041127 1244393 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 12:30:59.043722 1244393 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 12:30:59.045949 1244393 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:30:59.046003 1244393 cache.go:56] Caching tarball of preloaded images
	I1007 12:30:59.046034 1244393 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 12:30:59.046123 1244393 preload.go:172] Found /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 12:30:59.046145 1244393 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:30:59.046331 1244393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/config.json ...
	I1007 12:30:59.064059 1244393 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 12:30:59.064082 1244393 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 12:30:59.064129 1244393 cache.go:194] Successfully downloaded all kic artifacts
	I1007 12:30:59.064159 1244393 start.go:360] acquireMachinesLock for ha-600773-m04: {Name:mk2570df22830b0245e57e18e533652a915071f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:30:59.064327 1244393 start.go:364] duration metric: took 126.226µs to acquireMachinesLock for "ha-600773-m04"
	I1007 12:30:59.064368 1244393 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:30:59.064383 1244393 fix.go:54] fixHost starting: m04
	I1007 12:30:59.064677 1244393 cli_runner.go:164] Run: docker container inspect ha-600773-m04 --format={{.State.Status}}
	I1007 12:30:59.080926 1244393 fix.go:112] recreateIfNeeded on ha-600773-m04: state=Stopped err=<nil>
	W1007 12:30:59.080955 1244393 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:30:59.084112 1244393 out.go:177] * Restarting existing docker container for "ha-600773-m04" ...
	I1007 12:30:59.086661 1244393 cli_runner.go:164] Run: docker start ha-600773-m04
	I1007 12:30:59.401005 1244393 cli_runner.go:164] Run: docker container inspect ha-600773-m04 --format={{.State.Status}}
	I1007 12:30:59.430839 1244393 kic.go:430] container "ha-600773-m04" state is running.
	I1007 12:30:59.431190 1244393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-600773-m04
	I1007 12:30:59.454633 1244393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/config.json ...
	I1007 12:30:59.454892 1244393 machine.go:93] provisionDockerMachine start ...
	I1007 12:30:59.454958 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m04
	I1007 12:30:59.487529 1244393 main.go:141] libmachine: Using SSH client type: native
	I1007 12:30:59.487779 1244393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34317 <nil> <nil>}
	I1007 12:30:59.487793 1244393 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:30:59.488460 1244393 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1007 12:31:02.624524 1244393 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-600773-m04
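The "Error dialing TCP: ssh: handshake failed: EOF" line followed roughly three seconds later by a successful `hostname` run is the usual retry-until-sshd-is-up pattern after the container restart. A rough sketch with golang.org/x/crypto/ssh, reusing the forwarded port and key path logged for ha-600773-m04 but otherwise not minikube's libmachine SSH client:

// sshretry.go: illustrative retry loop; host/port/key taken from this log, logic is a sketch.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m04/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
		Timeout:         5 * time.Second,
	}
	// Keep dialing until sshd inside the restarted container accepts the handshake.
	for i := 0; i < 30; i++ {
		client, err := ssh.Dial("tcp", "127.0.0.1:34317", cfg)
		if err != nil {
			fmt.Println("dial failed, retrying:", err)
			time.Sleep(time.Second)
			continue
		}
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		out, err := session.CombinedOutput("hostname")
		session.Close()
		client.Close()
		if err != nil {
			panic(err)
		}
		fmt.Printf("hostname: %s", out)
		return
	}
	fmt.Println("gave up waiting for SSH")
}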
	
	I1007 12:31:02.624550 1244393 ubuntu.go:169] provisioning hostname "ha-600773-m04"
	I1007 12:31:02.624619 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m04
	I1007 12:31:02.643526 1244393 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:02.643786 1244393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34317 <nil> <nil>}
	I1007 12:31:02.643804 1244393 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-600773-m04 && echo "ha-600773-m04" | sudo tee /etc/hostname
	I1007 12:31:02.790694 1244393 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-600773-m04
	
	I1007 12:31:02.790842 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m04
	I1007 12:31:02.809670 1244393 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:02.809964 1244393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34317 <nil> <nil>}
	I1007 12:31:02.809991 1244393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-600773-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-600773-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-600773-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:31:02.945946 1244393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:31:02.946015 1244393 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19763-1173066/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-1173066/.minikube}
	I1007 12:31:02.946048 1244393 ubuntu.go:177] setting up certificates
	I1007 12:31:02.946085 1244393 provision.go:84] configureAuth start
	I1007 12:31:02.946165 1244393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-600773-m04
	I1007 12:31:02.962457 1244393 provision.go:143] copyHostCerts
	I1007 12:31:02.962500 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem
	I1007 12:31:02.962532 1244393 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem, removing ...
	I1007 12:31:02.962538 1244393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem
	I1007 12:31:02.962620 1244393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/cert.pem (1123 bytes)
	I1007 12:31:02.962703 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem
	I1007 12:31:02.962719 1244393 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem, removing ...
	I1007 12:31:02.962723 1244393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem
	I1007 12:31:02.962748 1244393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/key.pem (1675 bytes)
	I1007 12:31:02.962787 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem
	I1007 12:31:02.962802 1244393 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem, removing ...
	I1007 12:31:02.962808 1244393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem
	I1007 12:31:02.962833 1244393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.pem (1078 bytes)
	I1007 12:31:02.962877 1244393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem org=jenkins.ha-600773-m04 san=[127.0.0.1 192.168.58.5 ha-600773-m04 localhost minikube]
	I1007 12:31:03.836539 1244393 provision.go:177] copyRemoteCerts
	I1007 12:31:03.836608 1244393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:31:03.836656 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m04
	I1007 12:31:03.855094 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34317 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m04/id_rsa Username:docker}
	I1007 12:31:03.961056 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:31:03.961142 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:31:03.990532 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:31:03.990596 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 12:31:04.022690 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:31:04.022761 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:31:04.048453 1244393 provision.go:87] duration metric: took 1.102336641s to configureAuth
	I1007 12:31:04.048481 1244393 ubuntu.go:193] setting minikube options for container-runtime
	I1007 12:31:04.048722 1244393 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:31:04.048839 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m04
	I1007 12:31:04.066046 1244393 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:04.066293 1244393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 34317 <nil> <nil>}
	I1007 12:31:04.066313 1244393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:31:04.338962 1244393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:31:04.338986 1244393 machine.go:96] duration metric: took 4.884076297s to provisionDockerMachine
	I1007 12:31:04.339000 1244393 start.go:293] postStartSetup for "ha-600773-m04" (driver="docker")
	I1007 12:31:04.339012 1244393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:31:04.339077 1244393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:31:04.339123 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m04
	I1007 12:31:04.358334 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34317 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m04/id_rsa Username:docker}
	I1007 12:31:04.463134 1244393 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:31:04.466440 1244393 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 12:31:04.466524 1244393 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 12:31:04.466544 1244393 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 12:31:04.466552 1244393 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 12:31:04.466566 1244393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1173066/.minikube/addons for local assets ...
	I1007 12:31:04.466641 1244393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1173066/.minikube/files for local assets ...
	I1007 12:31:04.466729 1244393 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem -> 11784622.pem in /etc/ssl/certs
	I1007 12:31:04.466740 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem -> /etc/ssl/certs/11784622.pem
	I1007 12:31:04.466854 1244393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:31:04.475853 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem --> /etc/ssl/certs/11784622.pem (1708 bytes)
	I1007 12:31:04.502811 1244393 start.go:296] duration metric: took 163.794992ms for postStartSetup
	I1007 12:31:04.502941 1244393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:31:04.503005 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m04
	I1007 12:31:04.520462 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34317 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m04/id_rsa Username:docker}
	I1007 12:31:04.613577 1244393 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 12:31:04.618229 1244393 fix.go:56] duration metric: took 5.553840594s for fixHost
	I1007 12:31:04.618255 1244393 start.go:83] releasing machines lock for "ha-600773-m04", held for 5.553912133s
	I1007 12:31:04.618336 1244393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-600773-m04
	I1007 12:31:04.640958 1244393 out.go:177] * Found network options:
	I1007 12:31:04.643407 1244393 out.go:177]   - NO_PROXY=192.168.58.2,192.168.58.3
	W1007 12:31:04.645865 1244393 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:31:04.645897 1244393 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:31:04.645927 1244393 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:31:04.645943 1244393 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:31:04.646014 1244393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:31:04.646099 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m04
	I1007 12:31:04.646405 1244393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:31:04.646469 1244393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m04
	I1007 12:31:04.671403 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34317 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m04/id_rsa Username:docker}
	I1007 12:31:04.671903 1244393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34317 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m04/id_rsa Username:docker}
	I1007 12:31:04.927368 1244393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 12:31:04.932501 1244393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:31:04.941814 1244393 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 12:31:04.941980 1244393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:31:04.951706 1244393 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 12:31:04.951729 1244393 start.go:495] detecting cgroup driver to use...
	I1007 12:31:04.951790 1244393 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 12:31:04.951860 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:31:04.964968 1244393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:31:04.980732 1244393 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:31:04.980826 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:31:04.995220 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:31:05.014270 1244393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:31:05.125351 1244393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:31:05.228938 1244393 docker.go:233] disabling docker service ...
	I1007 12:31:05.229023 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:31:05.246379 1244393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:31:05.262070 1244393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:31:05.358525 1244393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:31:05.453097 1244393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:31:05.465743 1244393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:31:05.488415 1244393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:31:05.488497 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:05.499204 1244393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:31:05.499277 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:05.511084 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:05.521894 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:05.532047 1244393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:31:05.541589 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:05.552666 1244393 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:05.563066 1244393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:05.573177 1244393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:31:05.583009 1244393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:31:05.592235 1244393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:31:05.694006 1244393 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:31:05.823342 1244393 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:31:05.823471 1244393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:31:05.827316 1244393 start.go:563] Will wait 60s for crictl version
	I1007 12:31:05.827422 1244393 ssh_runner.go:195] Run: which crictl
	I1007 12:31:05.830857 1244393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:31:05.872757 1244393 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 12:31:05.872891 1244393 ssh_runner.go:195] Run: crio --version
	I1007 12:31:05.924898 1244393 ssh_runner.go:195] Run: crio --version
	I1007 12:31:05.976356 1244393 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 12:31:05.978817 1244393 out.go:177]   - env NO_PROXY=192.168.58.2
	I1007 12:31:05.981369 1244393 out.go:177]   - env NO_PROXY=192.168.58.2,192.168.58.3
	I1007 12:31:05.983869 1244393 cli_runner.go:164] Run: docker network inspect ha-600773 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 12:31:05.999697 1244393 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1007 12:31:06.006143 1244393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:31:06.023137 1244393 mustload.go:65] Loading cluster: ha-600773
	I1007 12:31:06.023391 1244393 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:31:06.023656 1244393 cli_runner.go:164] Run: docker container inspect ha-600773 --format={{.State.Status}}
	I1007 12:31:06.043514 1244393 host.go:66] Checking if "ha-600773" exists ...
	I1007 12:31:06.043880 1244393 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773 for IP: 192.168.58.5
	I1007 12:31:06.043895 1244393 certs.go:194] generating shared ca certs ...
	I1007 12:31:06.043910 1244393 certs.go:226] acquiring lock for ca certs: {Name:mk2f3e101c3a8a21aa5a00b0d7100cac880b0543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:06.044088 1244393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key
	I1007 12:31:06.044166 1244393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key
	I1007 12:31:06.044200 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:31:06.044224 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:31:06.044488 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:31:06.044532 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:31:06.044596 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/1178462.pem (1338 bytes)
	W1007 12:31:06.044636 1244393 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/1178462_empty.pem, impossibly tiny 0 bytes
	I1007 12:31:06.044650 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 12:31:06.044681 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/ca.pem (1078 bytes)
	I1007 12:31:06.044712 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:31:06.044734 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/key.pem (1675 bytes)
	I1007 12:31:06.044781 1244393 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem (1708 bytes)
	I1007 12:31:06.044813 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/1178462.pem -> /usr/share/ca-certificates/1178462.pem
	I1007 12:31:06.044895 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem -> /usr/share/ca-certificates/11784622.pem
	I1007 12:31:06.044919 1244393 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:06.044944 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:31:06.075516 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:31:06.102826 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:31:06.136481 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:31:06.162698 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/certs/1178462.pem --> /usr/share/ca-certificates/1178462.pem (1338 bytes)
	I1007 12:31:06.189050 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/ssl/certs/11784622.pem --> /usr/share/ca-certificates/11784622.pem (1708 bytes)
	I1007 12:31:06.214830 1244393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:31:06.241586 1244393 ssh_runner.go:195] Run: openssl version
	I1007 12:31:06.247051 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1178462.pem && ln -fs /usr/share/ca-certificates/1178462.pem /etc/ssl/certs/1178462.pem"
	I1007 12:31:06.256844 1244393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1178462.pem
	I1007 12:31:06.260311 1244393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:16 /usr/share/ca-certificates/1178462.pem
	I1007 12:31:06.260443 1244393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1178462.pem
	I1007 12:31:06.267322 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1178462.pem /etc/ssl/certs/51391683.0"
	I1007 12:31:06.276125 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11784622.pem && ln -fs /usr/share/ca-certificates/11784622.pem /etc/ssl/certs/11784622.pem"
	I1007 12:31:06.286201 1244393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11784622.pem
	I1007 12:31:06.289876 1244393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:16 /usr/share/ca-certificates/11784622.pem
	I1007 12:31:06.289935 1244393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11784622.pem
	I1007 12:31:06.297027 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11784622.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:31:06.306502 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:31:06.315668 1244393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:06.319174 1244393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:58 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:06.319237 1244393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:06.326572 1244393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
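Each certificate above is also linked as `<subject-hash>.0` under /etc/ssl/certs (51391683.0, 3ec20f2e.0 and b5213941.0 here); that hashed name is how OpenSSL locates a CA certificate at verification time. A small Go sketch of the hash-and-symlink step, shelling out to openssl the same way the logged commands do (paths are the ones shown in the log; the program itself is illustrative):

// certlink.go: reproduce the `openssl x509 -hash` + `ln -fs` step from the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout -in <cert>` prints the subject-name hash,
	// e.g. "b5213941", which OpenSSL uses as the lookup filename.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent to the logged `ln -fs <cert> /etc/ssl/certs/<hash>.0`.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", cert, "->", link)
}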
	I1007 12:31:06.335893 1244393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:31:06.339282 1244393 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:31:06.339329 1244393 kubeadm.go:934] updating node {m04 192.168.58.5 0 v1.31.1  false true} ...
	I1007 12:31:06.339409 1244393 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-600773-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-600773 Namespace:default APIServerHAVIP:192.168.58.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:31:06.339477 1244393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:31:06.348450 1244393 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:31:06.348546 1244393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1007 12:31:06.357185 1244393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1007 12:31:06.375787 1244393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:31:06.395691 1244393 ssh_runner.go:195] Run: grep 192.168.58.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:31:06.399397 1244393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:31:06.409991 1244393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:31:06.495817 1244393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:31:06.509247 1244393 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.58.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1007 12:31:06.509648 1244393 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:31:06.512811 1244393 out.go:177] * Verifying Kubernetes components...
	I1007 12:31:06.515436 1244393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:31:06.629571 1244393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:31:06.649997 1244393 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 12:31:06.650312 1244393 kapi.go:59] client config for ha-600773: &rest.Config{Host:"https://192.168.58.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/ha-600773/client.key", CAFile:"/home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e94a20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:31:06.650378 1244393 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.58.254:8443 with https://192.168.58.2:8443
	I1007 12:31:06.650633 1244393 node_ready.go:35] waiting up to 6m0s for node "ha-600773-m04" to be "Ready" ...
	I1007 12:31:06.650732 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:06.650758 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:06.650775 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:06.650786 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:06.653819 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:07.151114 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:07.151138 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:07.151148 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:07.151153 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:07.153987 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:07.650890 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:07.650914 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:07.650924 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:07.650928 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:07.653767 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:08.150922 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:08.150999 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:08.151015 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:08.151021 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:08.153871 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:08.651493 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:08.651520 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:08.651530 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:08.651533 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:08.654306 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:08.655045 1244393 node_ready.go:53] node "ha-600773-m04" has status "Ready":"Unknown"
	I1007 12:31:09.150899 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:09.150923 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:09.150933 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:09.150937 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:09.153683 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:09.651247 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:09.651268 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:09.651277 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:09.651282 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:09.665569 1244393 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1007 12:31:10.151039 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:10.151062 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:10.151072 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:10.151076 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:10.153963 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:10.651343 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:10.651364 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:10.651374 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:10.651380 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:10.654428 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:10.655467 1244393 node_ready.go:53] node "ha-600773-m04" has status "Ready":"Unknown"
	I1007 12:31:11.151767 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:11.151790 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:11.151799 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:11.151803 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:11.155248 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:11.651507 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:11.651527 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:11.651537 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:11.651542 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:11.656616 1244393 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:31:12.151727 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:12.151748 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:12.151757 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:12.151763 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:12.155154 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:12.650819 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:12.650839 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:12.650848 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:12.650852 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:12.654117 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:13.150816 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:13.150835 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:13.150844 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:13.150850 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:13.155876 1244393 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:31:13.156817 1244393 node_ready.go:53] node "ha-600773-m04" has status "Ready":"Unknown"
	I1007 12:31:13.651382 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:13.651408 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:13.651418 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:13.651425 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:13.657434 1244393 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:31:13.658938 1244393 node_ready.go:49] node "ha-600773-m04" has status "Ready":"True"
	I1007 12:31:13.658962 1244393 node_ready.go:38] duration metric: took 7.008307859s for node "ha-600773-m04" to be "Ready" ...
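The repeated `GET /api/v1/nodes/ha-600773-m04` calls above are a readiness poll: the node reports `Ready: Unknown` until its kubelet reconnects, then flips to `True` after about seven seconds. A minimal client-go sketch of the same poll follows; the kubeconfig path is taken from the log, the rest is illustrative and not minikube's node_ready.go:

// nodeready.go: poll a node's Ready condition until it is True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19763-1173066/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait above
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-600773-m04", metav1.GetOptions{})
		if err == nil && nodeIsReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence of the GETs above
	}
	fmt.Println("timed out waiting for node to become Ready")
}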
	I1007 12:31:13.658973 1244393 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:31:13.659058 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1007 12:31:13.659065 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:13.659073 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:13.659076 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:13.669094 1244393 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:31:13.682081 1244393 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:13.682309 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:13.682360 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:13.682393 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:13.682440 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:13.689668 1244393 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:31:13.691068 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:13.691108 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:13.691118 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:13.691124 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:13.696129 1244393 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:31:14.182423 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:14.182449 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:14.182459 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:14.182466 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:14.186053 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:14.187184 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:14.187204 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:14.187214 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:14.187218 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:14.191135 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:14.682590 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:14.682614 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:14.682624 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:14.682629 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:14.686961 1244393 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:31:14.687812 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:14.687832 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:14.687842 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:14.687846 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:14.690435 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:15.183105 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:15.183131 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:15.183142 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:15.183147 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:15.186923 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:15.187746 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:15.187795 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:15.187832 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:15.187855 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:15.191100 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:15.683279 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:15.683300 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:15.683310 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:15.683313 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:15.687008 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:15.687877 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:15.687898 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:15.687908 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:15.687912 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:15.690366 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:15.690904 1244393 pod_ready.go:103] pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:31:16.182587 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:16.182611 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:16.182620 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:16.182624 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:16.185782 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:16.186633 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:16.186653 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:16.186662 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:16.186693 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:16.189534 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:16.682977 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:16.683038 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:16.683072 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:16.683091 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:16.686413 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:16.687558 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:16.687630 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:16.687655 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:16.687679 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:16.693145 1244393 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:31:17.182669 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:17.182692 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:17.182701 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:17.182707 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:17.185770 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:17.186501 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:17.186519 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:17.186530 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:17.186534 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:17.189297 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:17.682425 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:17.682451 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:17.682461 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:17.682466 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:17.685590 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:17.686358 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:17.686376 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:17.686385 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:17.686390 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:17.689100 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:18.182862 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:18.182887 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:18.182905 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:18.182911 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:18.185987 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:18.186933 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:18.186951 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:18.186961 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:18.186985 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:18.189452 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:18.190012 1244393 pod_ready.go:103] pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:31:18.682512 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:18.682581 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:18.682603 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:18.682625 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:18.686207 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:18.686949 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:18.686968 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:18.686976 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:18.686980 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:18.689617 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:19.182454 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:19.182474 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:19.182484 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:19.182488 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:19.185374 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:19.186320 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:19.186336 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:19.186346 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:19.186351 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:19.188948 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:19.683132 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:19.683160 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:19.683170 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:19.683174 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:19.685964 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:19.686917 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:19.686934 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:19.686945 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:19.686948 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:19.689417 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:20.182929 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:20.182952 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:20.182962 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:20.182966 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:20.185978 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:20.186853 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:20.186874 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:20.186884 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:20.186889 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:20.189370 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:20.682632 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:20.682654 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:20.682663 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:20.682667 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:20.685378 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:20.686226 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:20.686252 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:20.686259 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:20.686264 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:20.688569 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:20.689327 1244393 pod_ready.go:103] pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:31:21.182502 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:21.182525 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:21.182535 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:21.182540 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:21.186192 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:21.187428 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:21.187481 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:21.187504 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:21.187525 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:21.195265 1244393 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:31:21.682452 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:21.682475 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:21.682490 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:21.682494 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:21.690931 1244393 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:31:21.691701 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:21.691720 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:21.691728 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:21.691733 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:21.695189 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:22.182616 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:22.182649 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:22.182659 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:22.182663 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:22.185443 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:22.186205 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:22.186222 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:22.186231 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:22.186238 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:22.188705 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:22.682484 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:22.682508 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:22.682518 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:22.682522 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:22.685274 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:22.686140 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:22.686161 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:22.686170 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:22.686174 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:22.688575 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:23.183296 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:23.183327 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:23.183339 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:23.183344 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:23.186131 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:23.186781 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:23.186801 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:23.186811 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:23.186815 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:23.189518 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:23.190573 1244393 pod_ready.go:103] pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:31:23.682752 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:23.682777 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:23.682786 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:23.682790 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:23.685610 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:23.686334 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:23.686350 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:23.686360 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:23.686364 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:23.688743 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:24.182917 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:24.182939 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:24.182949 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:24.182954 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:24.185909 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:24.186957 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:24.186988 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:24.186999 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:24.187004 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:24.189539 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:24.682776 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:24.682800 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:24.682810 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:24.682814 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:24.685648 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:24.686454 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:24.686472 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:24.686482 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:24.686486 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:24.688959 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:25.182981 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:25.183004 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:25.183013 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:25.183019 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:25.185930 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:25.186691 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:25.186702 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:25.186711 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:25.186716 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:25.189206 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:25.682361 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:25.682385 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:25.682395 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:25.682400 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:25.685224 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:25.686130 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:25.686149 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:25.686158 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:25.686164 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:25.688582 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:25.689200 1244393 pod_ready.go:103] pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:31:26.182707 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:26.182752 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:26.182793 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:26.182844 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:26.185738 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:26.186704 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:26.186721 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:26.186732 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:26.186737 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:26.189242 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:26.682625 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:26.682651 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:26.682660 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:26.682665 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:26.685967 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:26.687026 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:26.687049 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:26.687059 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:26.687063 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:26.689849 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:27.182751 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:27.182775 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:27.182785 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:27.182790 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:27.185962 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:27.186923 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:27.186940 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:27.186950 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:27.186955 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:27.189525 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:27.682577 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:27.682598 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:27.682608 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:27.682614 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:27.687411 1244393 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:31:27.688203 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:27.688226 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:27.688235 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:27.688239 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:27.692309 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:27.692969 1244393 pod_ready.go:103] pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:31:28.182431 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:28.182453 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:28.182462 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:28.182466 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:28.185390 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:28.186225 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:28.186254 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:28.186265 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:28.186271 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:28.188786 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:28.683154 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:28.683177 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:28.683187 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:28.683191 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:28.696191 1244393 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1007 12:31:28.698603 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:28.698628 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:28.698639 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:28.698644 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:28.711722 1244393 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1007 12:31:29.182814 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:29.182837 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:29.182847 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:29.182852 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:29.186740 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:29.187955 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:29.187977 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:29.187987 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:29.187994 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:29.190708 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:29.683035 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:29.683054 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:29.683063 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:29.683069 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:29.692740 1244393 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:31:29.693663 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:29.693682 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:29.693694 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:29.693698 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:29.697699 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:29.698183 1244393 pod_ready.go:103] pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:31:30.182518 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:30.182548 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:30.182558 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:30.182567 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:30.185871 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:30.186896 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:30.186919 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:30.186929 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:30.186935 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:30.190084 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:30.682956 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:30.682981 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:30.682991 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:30.682995 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:30.685775 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:30.686618 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:30.686636 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:30.686646 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:30.686660 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:30.689047 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:31.183303 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:31.183327 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:31.183337 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:31.183342 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:31.186225 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:31.187138 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:31.187157 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:31.187166 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:31.187170 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:31.189728 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:31.683235 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:31.683261 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:31.683270 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:31.683277 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:31.686411 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:31.687556 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:31.687580 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:31.687590 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:31.687596 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:31.691272 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:32.183196 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:32.183220 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:32.183229 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:32.183234 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:32.186117 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:32.186894 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:32.186915 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:32.186924 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:32.186929 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:32.189546 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:32.190037 1244393 pod_ready.go:103] pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:31:32.682856 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:32.682891 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:32.682902 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:32.682907 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:32.685956 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:32.686647 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:32.686664 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:32.686673 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:32.686678 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:32.689259 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:33.182387 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:33.182409 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:33.182420 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:33.182425 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:33.185481 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:33.186169 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:33.186186 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:33.186197 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:33.186202 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:33.188633 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:33.682378 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:33.682402 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:33.682413 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:33.682417 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:33.685272 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:33.685979 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:33.686001 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:33.686010 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:33.686015 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:33.688604 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:34.182633 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:34.182666 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:34.182676 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:34.182681 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:34.185884 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:34.186810 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:34.186831 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:34.186839 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:34.186843 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:34.189254 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:34.682744 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:34.682768 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:34.682777 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:34.682781 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:34.685590 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:34.686267 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:34.686278 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:34.686286 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:34.686290 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:34.688828 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:34.689321 1244393 pod_ready.go:103] pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:31:35.183130 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:35.183156 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.183165 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.183178 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.186416 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:35.187233 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:35.187248 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.187257 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.187262 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.189825 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:35.683171 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-blfnw
	I1007 12:31:35.683196 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.683206 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.683212 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.693297 1244393 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1007 12:31:35.694117 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:35.694133 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.694143 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.694149 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.702566 1244393 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:31:35.703373 1244393 pod_ready.go:98] node "ha-600773" hosting pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:35.703401 1244393 pod_ready.go:82] duration metric: took 22.02120954s for pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace to be "Ready" ...
	E1007 12:31:35.703430 1244393 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-600773" hosting pod "coredns-7c65d6cfc9-blfnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
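	[note] Each poll above is a pair of requests: GET the coredns pod, then GET the node it is scheduled on; the wait is abandoned once the hosting node reports Ready as "Unknown". A minimal sketch of that check, assuming an existing client-go clientset (illustrative only, not minikube's pod_ready.go):

	package readiness

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podAndNodeReady mirrors the GET pod / GET node pattern in the log: it
	// returns true only if the pod and the node hosting it both report Ready=True.
	func podAndNodeReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		podReady := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				podReady = true
			}
		}
		node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		nodeReady := false
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				nodeReady = true
			}
		}
		return podReady && nodeReady, nil
	}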
	I1007 12:31:35.703445 1244393 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jrczl" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:35.703542 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jrczl
	I1007 12:31:35.703560 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.703569 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.703572 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.708991 1244393 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:31:35.712025 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:35.712049 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.712058 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.712065 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.720860 1244393 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:31:35.721660 1244393 pod_ready.go:98] node "ha-600773" hosting pod "coredns-7c65d6cfc9-jrczl" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:35.721684 1244393 pod_ready.go:82] duration metric: took 18.232074ms for pod "coredns-7c65d6cfc9-jrczl" in "kube-system" namespace to be "Ready" ...
	E1007 12:31:35.721696 1244393 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-600773" hosting pod "coredns-7c65d6cfc9-jrczl" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:35.721735 1244393 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-600773" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:35.721820 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-600773
	I1007 12:31:35.721833 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.721842 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.721863 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.730805 1244393 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:31:35.731879 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:35.731940 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.731963 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.731987 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.741995 1244393 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:31:35.742583 1244393 pod_ready.go:98] node "ha-600773" hosting pod "etcd-ha-600773" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:35.742639 1244393 pod_ready.go:82] duration metric: took 20.895503ms for pod "etcd-ha-600773" in "kube-system" namespace to be "Ready" ...
	E1007 12:31:35.742665 1244393 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-600773" hosting pod "etcd-ha-600773" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:35.742702 1244393 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:35.742819 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-600773-m02
	I1007 12:31:35.742848 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.742876 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.742901 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.746943 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:35.748013 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:31:35.748066 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.748091 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.748116 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.764160 1244393 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1007 12:31:35.764836 1244393 pod_ready.go:93] pod "etcd-ha-600773-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:31:35.764893 1244393 pod_ready.go:82] duration metric: took 22.162947ms for pod "etcd-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:35.764929 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-600773" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:35.765028 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-600773
	I1007 12:31:35.765054 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.765077 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.765103 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.772486 1244393 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:31:35.773699 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:35.773754 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.773777 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.773802 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.780576 1244393 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:31:35.781581 1244393 pod_ready.go:98] node "ha-600773" hosting pod "kube-apiserver-ha-600773" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:35.781639 1244393 pod_ready.go:82] duration metric: took 16.676641ms for pod "kube-apiserver-ha-600773" in "kube-system" namespace to be "Ready" ...
	E1007 12:31:35.781664 1244393 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-600773" hosting pod "kube-apiserver-ha-600773" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:35.781686 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:35.884031 1244393 request.go:632] Waited for 102.238239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-600773-m02
	I1007 12:31:35.884131 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-600773-m02
	I1007 12:31:35.884143 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:35.884152 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:35.884157 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:35.887128 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:36.084207 1244393 request.go:632] Waited for 196.343794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:31:36.084354 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:31:36.084367 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:36.084376 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:36.084381 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:36.096839 1244393 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1007 12:31:36.097948 1244393 pod_ready.go:93] pod "kube-apiserver-ha-600773-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:31:36.098026 1244393 pod_ready.go:82] duration metric: took 316.29906ms for pod "kube-apiserver-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
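	[note] The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter, which spaces out requests once the burst allowance is used up; it is unrelated to the API server's priority-and-fairness feature. A hedged sketch of where that limiter is configured (the QPS/Burst numbers and kubeconfig path are illustrative assumptions, not values this test uses):

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		// client-go throttles on the client side using these two knobs; the
		// defaults are low, which is why bursts of GETs log short waits.
		cfg.QPS = 50
		cfg.Burst = 100
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_ = cs // subsequent requests through cs are paced by the limiter above
	}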
	I1007 12:31:36.098055 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-600773" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:36.283934 1244393 request.go:632] Waited for 185.736692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-600773
	I1007 12:31:36.284022 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-600773
	I1007 12:31:36.284035 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:36.284047 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:36.284051 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:36.289713 1244393 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:31:36.483549 1244393 request.go:632] Waited for 192.328893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:36.483629 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:36.483635 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:36.483643 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:36.483647 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:36.486573 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:36.487327 1244393 pod_ready.go:98] node "ha-600773" hosting pod "kube-controller-manager-ha-600773" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:36.487354 1244393 pod_ready.go:82] duration metric: took 389.24627ms for pod "kube-controller-manager-ha-600773" in "kube-system" namespace to be "Ready" ...
	E1007 12:31:36.487366 1244393 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-600773" hosting pod "kube-controller-manager-ha-600773" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:36.487375 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:36.683778 1244393 request.go:632] Waited for 196.329583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-600773-m02
	I1007 12:31:36.683838 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-600773-m02
	I1007 12:31:36.683846 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:36.683861 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:36.683867 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:36.692527 1244393 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:31:36.883520 1244393 request.go:632] Waited for 189.368982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:31:36.883626 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:31:36.883643 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:36.883652 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:36.883656 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:36.886295 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:36.886955 1244393 pod_ready.go:93] pod "kube-controller-manager-ha-600773-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:31:36.886976 1244393 pod_ready.go:82] duration metric: took 399.589088ms for pod "kube-controller-manager-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:36.886988 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gnxd8" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:37.083967 1244393 request.go:632] Waited for 196.892967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnxd8
	I1007 12:31:37.084074 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnxd8
	I1007 12:31:37.084086 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:37.084096 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:37.084119 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:37.086914 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:37.283933 1244393 request.go:632] Waited for 196.335688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:37.283991 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m04
	I1007 12:31:37.284038 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:37.284062 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:37.284073 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:37.287059 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:37.287733 1244393 pod_ready.go:93] pod "kube-proxy-gnxd8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:31:37.287759 1244393 pod_ready.go:82] duration metric: took 400.740185ms for pod "kube-proxy-gnxd8" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:37.287778 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rvn82" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:37.483215 1244393 request.go:632] Waited for 195.272205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rvn82
	I1007 12:31:37.483285 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rvn82
	I1007 12:31:37.483322 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:37.483338 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:37.483343 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:37.486935 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:37.684086 1244393 request.go:632] Waited for 196.149286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:37.684167 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:37.684178 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:37.684223 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:37.684235 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:37.687249 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:37.687816 1244393 pod_ready.go:98] node "ha-600773" hosting pod "kube-proxy-rvn82" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:37.687838 1244393 pod_ready.go:82] duration metric: took 400.048719ms for pod "kube-proxy-rvn82" in "kube-system" namespace to be "Ready" ...
	E1007 12:31:37.687850 1244393 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-600773" hosting pod "kube-proxy-rvn82" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:37.687857 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vf8ng" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:37.883212 1244393 request.go:632] Waited for 195.279319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vf8ng
	I1007 12:31:37.883309 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vf8ng
	I1007 12:31:37.883315 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:37.883324 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:37.883333 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:37.886302 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:38.083301 1244393 request.go:632] Waited for 196.221375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:31:38.083391 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:31:38.083401 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:38.083411 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:38.083434 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:38.086347 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:38.087394 1244393 pod_ready.go:93] pod "kube-proxy-vf8ng" in "kube-system" namespace has status "Ready":"True"
	I1007 12:31:38.087418 1244393 pod_ready.go:82] duration metric: took 399.552436ms for pod "kube-proxy-vf8ng" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:38.087447 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-600773" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:38.283455 1244393 request.go:632] Waited for 195.919454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-600773
	I1007 12:31:38.283557 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-600773
	I1007 12:31:38.283596 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:38.283623 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:38.283643 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:38.287678 1244393 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:31:38.483569 1244393 request.go:632] Waited for 195.333727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:38.483673 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773
	I1007 12:31:38.483689 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:38.483698 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:38.483703 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:38.486459 1244393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:31:38.487317 1244393 pod_ready.go:98] node "ha-600773" hosting pod "kube-scheduler-ha-600773" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:38.487342 1244393 pod_ready.go:82] duration metric: took 399.881886ms for pod "kube-scheduler-ha-600773" in "kube-system" namespace to be "Ready" ...
	E1007 12:31:38.487354 1244393 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-600773" hosting pod "kube-scheduler-ha-600773" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-600773" has status "Ready":"Unknown"
	I1007 12:31:38.487362 1244393 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:38.683319 1244393 request.go:632] Waited for 195.860263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-600773-m02
	I1007 12:31:38.683439 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-600773-m02
	I1007 12:31:38.683470 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:38.683499 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:38.683521 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:38.687674 1244393 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:31:38.883909 1244393 request.go:632] Waited for 195.325785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:31:38.883973 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-600773-m02
	I1007 12:31:38.883984 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:38.883994 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:38.883998 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:38.887151 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:38.888105 1244393 pod_ready.go:93] pod "kube-scheduler-ha-600773-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:31:38.888131 1244393 pod_ready.go:82] duration metric: took 400.760115ms for pod "kube-scheduler-ha-600773-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:31:38.888190 1244393 pod_ready.go:39] duration metric: took 25.229196958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:31:38.888217 1244393 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:31:38.888367 1244393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:31:38.901287 1244393 system_svc.go:56] duration metric: took 13.061384ms WaitForService to wait for kubelet
	I1007 12:31:38.901318 1244393 kubeadm.go:582] duration metric: took 32.392021774s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:31:38.901341 1244393 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:31:39.083769 1244393 request.go:632] Waited for 182.330017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1007 12:31:39.083829 1244393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1007 12:31:39.083835 1244393 round_trippers.go:469] Request Headers:
	I1007 12:31:39.083851 1244393 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:31:39.083858 1244393 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 12:31:39.087779 1244393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:31:39.089240 1244393 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 12:31:39.089277 1244393 node_conditions.go:123] node cpu capacity is 2
	I1007 12:31:39.089289 1244393 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 12:31:39.089294 1244393 node_conditions.go:123] node cpu capacity is 2
	I1007 12:31:39.089298 1244393 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 12:31:39.089303 1244393 node_conditions.go:123] node cpu capacity is 2
	I1007 12:31:39.089309 1244393 node_conditions.go:105] duration metric: took 187.961127ms to run NodePressure ...
	I1007 12:31:39.089322 1244393 start.go:241] waiting for startup goroutines ...
	I1007 12:31:39.089348 1244393 start.go:255] writing updated cluster config ...
	I1007 12:31:39.089713 1244393 ssh_runner.go:195] Run: rm -f paused
	I1007 12:31:39.170275 1244393 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:31:39.173192 1244393 out.go:177] * Done! kubectl is now configured to use "ha-600773" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 12:30:55 ha-600773 crio[643]: time="2024-10-07 12:30:55.964841366Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ef8a0db92fd728d6fc7d86691959a1d8c5fb45389812689017418e168441a78b/merged/etc/group: no such file or directory"
	Oct 07 12:30:56 ha-600773 crio[643]: time="2024-10-07 12:30:56.021871613Z" level=info msg="Created container 231c33d3a3d988b80df88bec074592f2397aae63554feaf19ad03f4092ff9d44: kube-system/storage-provisioner/storage-provisioner" id=a8b5aede-071a-4d6a-87ab-10b017a5dd34 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 12:30:56 ha-600773 crio[643]: time="2024-10-07 12:30:56.022480757Z" level=info msg="Starting container: 231c33d3a3d988b80df88bec074592f2397aae63554feaf19ad03f4092ff9d44" id=2c76d7fe-6e20-4466-9c21-441dbea80583 name=/runtime.v1.RuntimeService/StartContainer
	Oct 07 12:30:56 ha-600773 crio[643]: time="2024-10-07 12:30:56.032076051Z" level=info msg="Started container" PID=1840 containerID=231c33d3a3d988b80df88bec074592f2397aae63554feaf19ad03f4092ff9d44 description=kube-system/storage-provisioner/storage-provisioner id=2c76d7fe-6e20-4466-9c21-441dbea80583 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a7a249b9443452a58232e05c880edb1788e95fac0cf164b1be786aee2279018e
	Oct 07 12:31:05 ha-600773 crio[643]: time="2024-10-07 12:31:05.925667270Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 07 12:31:05 ha-600773 crio[643]: time="2024-10-07 12:31:05.931881037Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 07 12:31:05 ha-600773 crio[643]: time="2024-10-07 12:31:05.931919798Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 07 12:31:05 ha-600773 crio[643]: time="2024-10-07 12:31:05.931945570Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 07 12:31:05 ha-600773 crio[643]: time="2024-10-07 12:31:05.935798084Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 07 12:31:05 ha-600773 crio[643]: time="2024-10-07 12:31:05.935830396Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 07 12:31:05 ha-600773 crio[643]: time="2024-10-07 12:31:05.935844590Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 07 12:31:05 ha-600773 crio[643]: time="2024-10-07 12:31:05.939391491Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 07 12:31:05 ha-600773 crio[643]: time="2024-10-07 12:31:05.939432048Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 07 12:31:05 ha-600773 crio[643]: time="2024-10-07 12:31:05.939453595Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 07 12:31:05 ha-600773 crio[643]: time="2024-10-07 12:31:05.943131522Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 07 12:31:05 ha-600773 crio[643]: time="2024-10-07 12:31:05.943167304Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 07 12:31:10 ha-600773 crio[643]: time="2024-10-07 12:31:10.715188708Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=d44cbecb-fdbd-41d1-b8a7-0678d285d545 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 12:31:10 ha-600773 crio[643]: time="2024-10-07 12:31:10.715414881Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=d44cbecb-fdbd-41d1-b8a7-0678d285d545 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 12:31:10 ha-600773 crio[643]: time="2024-10-07 12:31:10.716125053Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=59d6a751-1851-4012-bcfa-0f634e9f05bd name=/runtime.v1.ImageService/ImageStatus
	Oct 07 12:31:10 ha-600773 crio[643]: time="2024-10-07 12:31:10.716419246Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=59d6a751-1851-4012-bcfa-0f634e9f05bd name=/runtime.v1.ImageService/ImageStatus
	Oct 07 12:31:10 ha-600773 crio[643]: time="2024-10-07 12:31:10.717219953Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-600773/kube-controller-manager" id=5095e8b7-17b0-4fc2-8f2c-42c970a0f23a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 12:31:10 ha-600773 crio[643]: time="2024-10-07 12:31:10.717314164Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 07 12:31:10 ha-600773 crio[643]: time="2024-10-07 12:31:10.805176876Z" level=info msg="Created container 18e5cc8b86e49957aea0a7f6e92f3a8b7054a73d52a090fe0b37feb966220fa4: kube-system/kube-controller-manager-ha-600773/kube-controller-manager" id=5095e8b7-17b0-4fc2-8f2c-42c970a0f23a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 12:31:10 ha-600773 crio[643]: time="2024-10-07 12:31:10.805844842Z" level=info msg="Starting container: 18e5cc8b86e49957aea0a7f6e92f3a8b7054a73d52a090fe0b37feb966220fa4" id=ec0a065e-9bb9-4b62-839c-88b7d16b6235 name=/runtime.v1.RuntimeService/StartContainer
	Oct 07 12:31:10 ha-600773 crio[643]: time="2024-10-07 12:31:10.816323132Z" level=info msg="Started container" PID=1919 containerID=18e5cc8b86e49957aea0a7f6e92f3a8b7054a73d52a090fe0b37feb966220fa4 description=kube-system/kube-controller-manager-ha-600773/kube-controller-manager id=ec0a065e-9bb9-4b62-839c-88b7d16b6235 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a03035e5cdd401d2ae6b2a0b135de52061a38dcbd9b460d8dced3d180087a0b0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	18e5cc8b86e49       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   30 seconds ago       Running             kube-controller-manager   8                   a03035e5cdd40       kube-controller-manager-ha-600773
	231c33d3a3d98       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   45 seconds ago       Running             storage-provisioner       4                   a7a249b944345       storage-provisioner
	2910e34a9d43f       4eadde00b6c50b581474eaa28b09bfcdd974ccaab8bafac22b08e7d2ecd66fc1   49 seconds ago       Running             kube-vip                  3                   94756fa4af090       kube-vip-ha-600773
	a9fad3c41c8e3       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   54 seconds ago       Running             kube-apiserver            4                   f1bda19949d1a       kube-apiserver-ha-600773
	9587c90763cb1       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   223ac06aade45       coredns-7c65d6cfc9-jrczl
	3aeb0568c0d5f       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   33174813b1252       busybox-7dff88458-jdnkg
	0ea5b1d78d34a       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   About a minute ago   Running             kube-proxy                2                   c6338b010ce16       kube-proxy-rvn82
	13da0f0288f5c       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   4e954f238e227       coredns-7c65d6cfc9-blfnw
	d10bfc28c0e3f       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Running             kindnet-cni               2                   88eca43d8f9de       kindnet-4zd8h
	8fda388923e58       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       3                   a7a249b944345       storage-provisioner
	65c86b8a73c1b       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   About a minute ago   Exited              kube-controller-manager   7                   a03035e5cdd40       kube-controller-manager-ha-600773
	09b5383c8dbc4       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   2 minutes ago        Running             etcd                      2                   dcb98ff5b38d7       etcd-ha-600773
	2d2a683539a03       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   2 minutes ago        Running             kube-scheduler            2                   aaf10e14af501       kube-scheduler-ha-600773
	0f3b503b6214d       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   2 minutes ago        Exited              kube-apiserver            3                   f1bda19949d1a       kube-apiserver-ha-600773
	dd8d13a933710       4eadde00b6c50b581474eaa28b09bfcdd974ccaab8bafac22b08e7d2ecd66fc1   2 minutes ago        Exited              kube-vip                  2                   94756fa4af090       kube-vip-ha-600773
	
	
	==> coredns [13da0f0288f5c80553f1699a1ebc56c060785b13cc65fcd5ca74f0c6048c683f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56377 - 60789 "HINFO IN 5643457720572439768.1878249834129296586. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048573443s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1106477132]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:30:25.624) (total time: 30002ms):
	Trace[1106477132]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:30:55.626)
	Trace[1106477132]: [30.002507006s] [30.002507006s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1386528669]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:30:25.625) (total time: 30002ms):
	Trace[1386528669]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:30:55.627)
	Trace[1386528669]: [30.0023309s] [30.0023309s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1205445703]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:30:25.625) (total time: 30002ms):
	Trace[1205445703]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:30:55.627)
	Trace[1205445703]: [30.00276974s] [30.00276974s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [9587c90763cb1a8b4f0d3db186010b794d88912b8a300dc490f5d1eb8f86d69f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51990 - 58204 "HINFO IN 2263138293363518376.2055117964042293138. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00427898s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1947972734]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:30:25.691) (total time: 30002ms):
	Trace[1947972734]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (12:30:55.693)
	Trace[1947972734]: [30.002328812s] [30.002328812s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1527141723]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:30:25.692) (total time: 30000ms):
	Trace[1527141723]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:30:55.693)
	Trace[1527141723]: [30.000842243s] [30.000842243s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[591631054]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:30:25.693) (total time: 30000ms):
	Trace[591631054]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:30:55.694)
	Trace[591631054]: [30.000945439s] [30.000945439s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-600773
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-600773
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-600773
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_21_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:20:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-600773
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:31:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:31:41 +0000   Mon, 07 Oct 2024 12:31:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:31:41 +0000   Mon, 07 Oct 2024 12:31:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:31:41 +0000   Mon, 07 Oct 2024 12:31:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:31:41 +0000   Mon, 07 Oct 2024 12:31:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    ha-600773
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 08108c43589d45c7ae93f07f9aad7595
	  System UUID:                50546e3e-c2cd-45b2-b182-76a76fe86ccf
	  Boot ID:                    9a8fefe6-3962-4cb9-809a-2b740ac8992f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jdnkg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 coredns-7c65d6cfc9-blfnw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-7c65d6cfc9-jrczl             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-600773                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-4zd8h                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-600773             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-600773    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-rvn82                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-600773             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-600773                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m45s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 75s                    kube-proxy       
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node ha-600773 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node ha-600773 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node ha-600773 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-600773 event: Registered Node ha-600773 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-600773 event: Registered Node ha-600773 in Controller
	  Normal   NodeReady                9m56s                  kubelet          Node ha-600773 status is now: NodeReady
	  Normal   RegisteredNode           8m59s                  node-controller  Node ha-600773 event: Registered Node ha-600773 in Controller
	  Normal   NodeHasSufficientPID     5m51s (x7 over 5m51s)  kubelet          Node ha-600773 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  5m51s (x8 over 5m51s)  kubelet          Node ha-600773 status is now: NodeHasSufficientMemory
	  Normal   Starting                 5m51s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m51s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    5m51s (x8 over 5m51s)  kubelet          Node ha-600773 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           5m10s                  node-controller  Node ha-600773 event: Registered Node ha-600773 in Controller
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-600773 event: Registered Node ha-600773 in Controller
	  Normal   RegisteredNode           3m55s                  node-controller  Node ha-600773 event: Registered Node ha-600773 in Controller
	  Normal   NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node ha-600773 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m11s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m11s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node ha-600773 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m11s (x7 over 2m11s)  kubelet          Node ha-600773 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           92s                    node-controller  Node ha-600773 event: Registered Node ha-600773 in Controller
	  Normal   RegisteredNode           27s                    node-controller  Node ha-600773 event: Registered Node ha-600773 in Controller
	  Normal   NodeNotReady             7s                     node-controller  Node ha-600773 status is now: NodeNotReady
	
	
	Name:               ha-600773-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-600773-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-600773
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_21_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:21:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-600773-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:31:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:30:03 +0000   Mon, 07 Oct 2024 12:21:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:30:03 +0000   Mon, 07 Oct 2024 12:21:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:30:03 +0000   Mon, 07 Oct 2024 12:21:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:30:03 +0000   Mon, 07 Oct 2024 12:22:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    ha-600773-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 50543b6747a74bbbb282a13b93950df2
	  System UUID:                cdf30f13-d29d-408c-ae30-f27c05b8b4ef
	  Boot ID:                    9a8fefe6-3962-4cb9-809a-2b740ac8992f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4k82z                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 etcd-ha-600773-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-cqxld                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-600773-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-600773-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-vf8ng                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-600773-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-600773-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 6m34s                  kube-proxy       
	  Normal   Starting                 4m53s                  kube-proxy       
	  Normal   Starting                 83s                    kube-proxy       
	  Normal   RegisteredNode           10m                    node-controller  Node ha-600773-m02 event: Registered Node ha-600773-m02 in Controller
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-600773-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-600773-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-600773-m02 status is now: NodeHasNoDiskPressure
	  Normal   CIDRAssignmentFailed     10m                    cidrAllocator    Node ha-600773-m02 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           10m                    node-controller  Node ha-600773-m02 event: Registered Node ha-600773-m02 in Controller
	  Normal   RegisteredNode           8m59s                  node-controller  Node ha-600773-m02 event: Registered Node ha-600773-m02 in Controller
	  Warning  CgroupV1                 6m59s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 6m59s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m59s (x8 over 6m59s)  kubelet          Node ha-600773-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m59s (x8 over 6m59s)  kubelet          Node ha-600773-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m59s (x7 over 6m59s)  kubelet          Node ha-600773-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m50s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m50s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     5m50s (x7 over 5m50s)  kubelet          Node ha-600773-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  5m49s (x8 over 5m50s)  kubelet          Node ha-600773-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m49s (x8 over 5m50s)  kubelet          Node ha-600773-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           5m10s                  node-controller  Node ha-600773-m02 event: Registered Node ha-600773-m02 in Controller
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-600773-m02 event: Registered Node ha-600773-m02 in Controller
	  Normal   RegisteredNode           3m55s                  node-controller  Node ha-600773-m02 event: Registered Node ha-600773-m02 in Controller
	  Normal   Starting                 2m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m9s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m9s (x8 over 2m9s)    kubelet          Node ha-600773-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s (x8 over 2m9s)    kubelet          Node ha-600773-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s (x7 over 2m9s)    kubelet          Node ha-600773-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           92s                    node-controller  Node ha-600773-m02 event: Registered Node ha-600773-m02 in Controller
	  Normal   RegisteredNode           27s                    node-controller  Node ha-600773-m02 event: Registered Node ha-600773-m02 in Controller
	
	
	Name:               ha-600773-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-600773-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-600773
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_23_49_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:23:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-600773-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:31:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:31:13 +0000   Mon, 07 Oct 2024 12:31:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:31:13 +0000   Mon, 07 Oct 2024 12:31:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:31:13 +0000   Mon, 07 Oct 2024 12:31:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:31:13 +0000   Mon, 07 Oct 2024 12:31:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.5
	  Hostname:    ha-600773-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 aca0c40b97a441ae9e60f07b5778c5fd
	  System UUID:                b82b0956-ffa1-45ce-8aae-11ad94ea93ff
	  Boot ID:                    9a8fefe6-3962-4cb9-809a-2b740ac8992f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xnnn9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 kindnet-xtjsq              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m54s
	  kube-system                 kube-proxy-gnxd8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m8s                   kube-proxy       
	  Normal   Starting                 14s                    kube-proxy       
	  Normal   Starting                 7m51s                  kube-proxy       
	  Normal   CIDRAssignmentFailed     7m54s                  cidrAllocator    Node ha-600773-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasNoDiskPressure    7m54s (x2 over 7m54s)  kubelet          Node ha-600773-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m54s (x2 over 7m54s)  kubelet          Node ha-600773-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m54s                  node-controller  Node ha-600773-m04 event: Registered Node ha-600773-m04 in Controller
	  Normal   NodeHasSufficientMemory  7m54s (x2 over 7m54s)  kubelet          Node ha-600773-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           7m53s                  node-controller  Node ha-600773-m04 event: Registered Node ha-600773-m04 in Controller
	  Normal   RegisteredNode           7m49s                  node-controller  Node ha-600773-m04 event: Registered Node ha-600773-m04 in Controller
	  Normal   NodeReady                7m41s                  kubelet          Node ha-600773-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m10s                  node-controller  Node ha-600773-m04 event: Registered Node ha-600773-m04 in Controller
	  Normal   NodeNotReady             4m30s                  node-controller  Node ha-600773-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-600773-m04 event: Registered Node ha-600773-m04 in Controller
	  Normal   RegisteredNode           3m55s                  node-controller  Node ha-600773-m04 event: Registered Node ha-600773-m04 in Controller
	  Normal   Starting                 3m39s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m39s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m32s (x7 over 3m39s)  kubelet          Node ha-600773-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    3m26s (x8 over 3m39s)  kubelet          Node ha-600773-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  3m26s (x8 over 3m39s)  kubelet          Node ha-600773-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           92s                    node-controller  Node ha-600773-m04 event: Registered Node ha-600773-m04 in Controller
	  Normal   NodeNotReady             52s                    node-controller  Node ha-600773-m04 status is now: NodeNotReady
	  Normal   Starting                 42s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 42s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     36s (x7 over 42s)      kubelet          Node ha-600773-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    29s (x8 over 42s)      kubelet          Node ha-600773-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  29s (x8 over 42s)      kubelet          Node ha-600773-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           27s                    node-controller  Node ha-600773-m04 event: Registered Node ha-600773-m04 in Controller
	
	
	==> dmesg <==
	[Oct 7 11:30] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [09b5383c8dbc46051dec16099489f7ec3c7b75009aef2e25f0953baa399e8407] <==
	{"level":"warn","ts":"2024-10-07T12:30:02.463516Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:29:59.185621Z","time spent":"3.277887748s","remote":"127.0.0.1:58574","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":2,"response size":7205,"request content":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" limit:10000 "}
	{"level":"warn","ts":"2024-10-07T12:30:02.463651Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.278010339s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T12:30:02.463702Z","caller":"traceutil/trace.go:171","msg":"trace[79162321] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; response_count:0; response_revision:2563; }","duration":"3.278063492s","start":"2024-10-07T12:29:59.185632Z","end":"2024-10-07T12:30:02.463696Z","steps":["trace[79162321] 'agreement among raft nodes before linearized reading'  (duration: 3.277996973s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:30:02.463746Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:29:59.185597Z","time spent":"3.278141292s","remote":"127.0.0.1:58576","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":28,"request content":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" limit:10000 "}
	{"level":"warn","ts":"2024-10-07T12:30:02.463968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.278357701s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" limit:10000 ","response":"range_response_count:2 size:5899"}
	{"level":"info","ts":"2024-10-07T12:30:02.464025Z","caller":"traceutil/trace.go:171","msg":"trace[2126929093] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; response_count:2; response_revision:2563; }","duration":"3.278416785s","start":"2024-10-07T12:29:59.185601Z","end":"2024-10-07T12:30:02.464018Z","steps":["trace[2126929093] 'agreement among raft nodes before linearized reading'  (duration: 3.278261357s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:30:02.464070Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:29:59.185590Z","time spent":"3.278473433s","remote":"127.0.0.1:58592","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":2,"response size":5922,"request content":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" limit:10000 "}
	{"level":"warn","ts":"2024-10-07T12:30:02.464241Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.278717714s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" limit:10000 ","response":"range_response_count:2 size:7586"}
	{"level":"info","ts":"2024-10-07T12:30:02.464322Z","caller":"traceutil/trace.go:171","msg":"trace[2051638489] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:2; response_revision:2563; }","duration":"3.278799625s","start":"2024-10-07T12:29:59.185515Z","end":"2024-10-07T12:30:02.464315Z","steps":["trace[2051638489] 'agreement among raft nodes before linearized reading'  (duration: 3.278672242s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:30:02.464371Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:29:59.185490Z","time spent":"3.278870058s","remote":"127.0.0.1:58588","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":2,"response size":7609,"request content":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" limit:10000 "}
	{"level":"warn","ts":"2024-10-07T12:30:02.464557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.279973162s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 ","response":"range_response_count:2 size:5901"}
	{"level":"info","ts":"2024-10-07T12:30:02.464612Z","caller":"traceutil/trace.go:171","msg":"trace[624010243] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:2; response_revision:2563; }","duration":"3.280029769s","start":"2024-10-07T12:29:59.184575Z","end":"2024-10-07T12:30:02.464605Z","steps":["trace[624010243] 'agreement among raft nodes before linearized reading'  (duration: 3.279926336s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:30:02.464659Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:29:59.184563Z","time spent":"3.280087057s","remote":"127.0.0.1:58594","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":2,"response size":5924,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 "}
	{"level":"warn","ts":"2024-10-07T12:30:02.464939Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.280436035s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" limit:10000 ","response":"range_response_count:13 size:14397"}
	{"level":"info","ts":"2024-10-07T12:30:02.465362Z","caller":"traceutil/trace.go:171","msg":"trace[1312793438] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:13; response_revision:2563; }","duration":"3.280859309s","start":"2024-10-07T12:29:59.184493Z","end":"2024-10-07T12:30:02.465352Z","steps":["trace[1312793438] 'agreement among raft nodes before linearized reading'  (duration: 3.280340347s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:30:02.465452Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:29:59.184456Z","time spent":"3.280981712s","remote":"127.0.0.1:58560","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":13,"response size":14420,"request content":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" limit:10000 "}
	{"level":"warn","ts":"2024-10-07T12:30:02.465618Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.287840174s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T12:30:02.465675Z","caller":"traceutil/trace.go:171","msg":"trace[146197601] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:2563; }","duration":"3.287898619s","start":"2024-10-07T12:29:59.177770Z","end":"2024-10-07T12:30:02.465668Z","steps":["trace[146197601] 'agreement among raft nodes before linearized reading'  (duration: 3.287824421s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:30:02.465729Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:29:59.177758Z","time spent":"3.287956375s","remote":"127.0.0.1:58606","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":28,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:10000 "}
	{"level":"warn","ts":"2024-10-07T12:30:02.465864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.288092529s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T12:30:02.465918Z","caller":"traceutil/trace.go:171","msg":"trace[392310568] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:2563; }","duration":"3.288146125s","start":"2024-10-07T12:29:59.177763Z","end":"2024-10-07T12:30:02.465909Z","steps":["trace[392310568] 'agreement among raft nodes before linearized reading'  (duration: 3.288078958s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:30:02.465966Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:29:59.177750Z","time spent":"3.288208671s","remote":"127.0.0.1:58616","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":28,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:10000 "}
	{"level":"warn","ts":"2024-10-07T12:30:02.466114Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.288370057s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T12:30:02.466164Z","caller":"traceutil/trace.go:171","msg":"trace[1359020422] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:2563; }","duration":"3.288421658s","start":"2024-10-07T12:29:59.177736Z","end":"2024-10-07T12:30:02.466157Z","steps":["trace[1359020422] 'agreement among raft nodes before linearized reading'  (duration: 3.288357733s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:30:02.466209Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:29:59.177712Z","time spent":"3.288489153s","remote":"127.0.0.1:58630","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":28,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:10000 "}
	
	
	==> kernel <==
	 12:31:42 up  8:14,  0 users,  load average: 1.81, 2.48, 2.00
	Linux ha-600773 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d10bfc28c0e3fef2bb68e6968d5a91d9e1e34250fca94d879161137bfdefba59] <==
	I1007 12:31:05.925021       1 main.go:322] Node ha-600773-m02 has CIDR [10.244.1.0/24] 
	I1007 12:31:05.925255       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1007 12:31:05.925339       1 main.go:295] Handling node with IPs: map[192.168.58.5:{}]
	I1007 12:31:05.925355       1 main.go:322] Node ha-600773-m04 has CIDR [10.244.3.0/24] 
	I1007 12:31:05.925399       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.58.5 Flags: [] Table: 0} 
	I1007 12:31:05.925443       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:31:05.925450       1 main.go:299] handling current node
	I1007 12:31:15.918081       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:31:15.918114       1 main.go:299] handling current node
	I1007 12:31:15.918130       1 main.go:295] Handling node with IPs: map[192.168.58.3:{}]
	I1007 12:31:15.918136       1 main.go:322] Node ha-600773-m02 has CIDR [10.244.1.0/24] 
	I1007 12:31:15.918243       1 main.go:295] Handling node with IPs: map[192.168.58.5:{}]
	I1007 12:31:15.918303       1 main.go:322] Node ha-600773-m04 has CIDR [10.244.3.0/24] 
	I1007 12:31:25.918220       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:31:25.918260       1 main.go:299] handling current node
	I1007 12:31:25.918277       1 main.go:295] Handling node with IPs: map[192.168.58.3:{}]
	I1007 12:31:25.918284       1 main.go:322] Node ha-600773-m02 has CIDR [10.244.1.0/24] 
	I1007 12:31:25.918407       1 main.go:295] Handling node with IPs: map[192.168.58.5:{}]
	I1007 12:31:25.918499       1 main.go:322] Node ha-600773-m04 has CIDR [10.244.3.0/24] 
	I1007 12:31:35.917433       1 main.go:295] Handling node with IPs: map[192.168.58.5:{}]
	I1007 12:31:35.917554       1 main.go:322] Node ha-600773-m04 has CIDR [10.244.3.0/24] 
	I1007 12:31:35.917708       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 12:31:35.917747       1 main.go:299] handling current node
	I1007 12:31:35.917786       1 main.go:295] Handling node with IPs: map[192.168.58.3:{}]
	I1007 12:31:35.917822       1 main.go:322] Node ha-600773-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0f3b503b6214dd9fcd7b03d1e42c37592e70de5eef8a1ca004e446a13988c189] <==
	W1007 12:30:02.419381       1 reflector.go:561] storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions: failed to list *apiextensions.CustomResourceDefinition: etcdserver: leader changed
	E1007 12:30:02.419450       1 cacher.go:478] cacher (customresourcedefinitions.apiextensions.k8s.io): unexpected ListAndWatch error: failed to list *apiextensions.CustomResourceDefinition: etcdserver: leader changed; reinitializing...
	I1007 12:30:02.545291       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1007 12:30:02.551637       1 aggregator.go:171] initial CRD sync complete...
	I1007 12:30:02.552724       1 autoregister_controller.go:144] Starting autoregister controller
	I1007 12:30:02.552766       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1007 12:30:02.590920       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1007 12:30:02.599246       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1007 12:30:02.599664       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1007 12:30:02.599724       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1007 12:30:02.600667       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1007 12:30:02.602647       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1007 12:30:02.602677       1 policy_source.go:224] refreshing policies
	I1007 12:30:02.612139       1 shared_informer.go:320] Caches are synced for configmaps
	I1007 12:30:02.612232       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1007 12:30:02.612375       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1007 12:30:02.620388       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W1007 12:30:02.637538       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.58.3]
	I1007 12:30:02.639147       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 12:30:02.648549       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 12:30:02.655998       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1007 12:30:02.660005       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1007 12:30:02.672367       1 cache.go:39] Caches are synced for autoregister controller
	I1007 12:30:02.677504       1 shared_informer.go:320] Caches are synced for node_authorizer
	F1007 12:30:46.199986       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-apiserver [a9fad3c41c8e336ca845978e0cadccdaab5d33b1d7fae0f1f4058afa9609270c] <==
	I1007 12:30:50.778405       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1007 12:30:50.778445       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1007 12:30:50.778953       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1007 12:30:50.779012       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1007 12:30:50.903623       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1007 12:30:50.908407       1 policy_source.go:224] refreshing policies
	I1007 12:30:50.930865       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 12:30:50.951358       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1007 12:30:50.951552       1 shared_informer.go:320] Caches are synced for configmaps
	I1007 12:30:50.951849       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1007 12:30:50.952582       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1007 12:30:50.962658       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1007 12:30:50.962748       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1007 12:30:50.966546       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1007 12:30:50.969643       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1007 12:30:50.973511       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1007 12:30:50.980397       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1007 12:30:50.980537       1 aggregator.go:171] initial CRD sync complete...
	I1007 12:30:50.980578       1 autoregister_controller.go:144] Starting autoregister controller
	I1007 12:30:50.980608       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1007 12:30:50.980648       1 cache.go:39] Caches are synced for autoregister controller
	I1007 12:30:51.358087       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1007 12:30:51.912561       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.58.2 192.168.58.3]
	I1007 12:30:51.914398       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 12:30:51.924236       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [18e5cc8b86e49957aea0a7f6e92f3a8b7054a73d52a090fe0b37feb966220fa4] <==
	I1007 12:31:15.632124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.724µs"
	I1007 12:31:15.867076       1 shared_informer.go:320] Caches are synced for garbage collector
	I1007 12:31:15.867110       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1007 12:31:15.872648       1 shared_informer.go:320] Caches are synced for garbage collector
	I1007 12:31:28.744196       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="87.626713ms"
	I1007 12:31:28.744378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="80.877µs"
	I1007 12:31:35.575075       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-600773"
	I1007 12:31:35.575214       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-600773-m04"
	I1007 12:31:35.593215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-600773"
	I1007 12:31:35.758551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.277063ms"
	I1007 12:31:35.758698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="77.595µs"
	I1007 12:31:36.175408       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-bn2vf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-bn2vf\": the object has been modified; please apply your changes to the latest version and try again"
	I1007 12:31:36.176146       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9cc5ae65-8626-482a-8b1a-4ea4bc33eb66", APIVersion:"v1", ResourceVersion:"240", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-bn2vf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-bn2vf": the object has been modified; please apply your changes to the latest version and try again
	I1007 12:31:36.227728       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-bn2vf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-bn2vf\": the object has been modified; please apply your changes to the latest version and try again"
	I1007 12:31:36.227875       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9cc5ae65-8626-482a-8b1a-4ea4bc33eb66", APIVersion:"v1", ResourceVersion:"240", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-bn2vf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-bn2vf": the object has been modified; please apply your changes to the latest version and try again
	I1007 12:31:36.240523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="164.561511ms"
	E1007 12:31:36.242026       1 replica_set.go:560] "Unhandled Error" err="sync \"kube-system/coredns-7c65d6cfc9\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-7c65d6cfc9\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1007 12:31:36.243333       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="105.238µs"
	I1007 12:31:36.252507       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.685µs"
	I1007 12:31:40.355050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-600773"
	I1007 12:31:41.313339       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-600773"
	I1007 12:31:41.313397       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-600773-m04"
	I1007 12:31:41.332690       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-600773"
	I1007 12:31:41.881127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="116.487817ms"
	I1007 12:31:41.881375       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="171.059µs"
	
	
	==> kube-controller-manager [65c86b8a73c1b9c3cde7a5c264d9a59ba13d3740de5152f95da836c85b11fc05] <==
	I1007 12:30:26.287657       1 serving.go:386] Generated self-signed cert in-memory
	I1007 12:30:28.356401       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1007 12:30:28.356432       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:30:28.358874       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1007 12:30:28.359961       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1007 12:30:28.360079       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1007 12:30:28.360141       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1007 12:30:38.379366       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [0ea5b1d78d34a8f3f68ff9b282d7de65ef03bf10f9fd24eaa3823f0e256f3f79] <==
	I1007 12:30:25.787789       1 server_linux.go:66] "Using iptables proxy"
	I1007 12:30:26.057927       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.58.2"]
	E1007 12:30:26.058000       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:30:26.103080       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1007 12:30:26.103144       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:30:26.105103       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:30:26.105383       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:30:26.105406       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:30:26.109537       1 config.go:199] "Starting service config controller"
	I1007 12:30:26.109592       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:30:26.109773       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:30:26.109785       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:30:26.113616       1 config.go:328] "Starting node config controller"
	I1007 12:30:26.113638       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:30:26.210599       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 12:30:26.210669       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:30:26.214673       1 shared_informer.go:320] Caches are synced for node config
	W1007 12:31:35.943225       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2837": http2: client connection lost
	W1007 12:31:35.943340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2776": http2: client connection lost
	E1007 12:31:35.943390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2776\": http2: client connection lost" logger="UnhandledError"
	E1007 12:31:35.943400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2837\": http2: client connection lost" logger="UnhandledError"
	W1007 12:31:35.943225       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-600773&resourceVersion=2776": http2: client connection lost
	E1007 12:31:35.943478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-600773&resourceVersion=2776\": http2: client connection lost" logger="UnhandledError"
	
	
	==> kube-scheduler [2d2a683539a033549d9fb1fb1efad662502c4dfc164457766589c4f1edffa97c] <==
	E1007 12:29:56.590394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:29:56.799474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 12:29:56.799516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:29:56.805118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 12:29:56.805157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:29:57.296780       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 12:29:57.296934       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:30:01.043374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 12:30:01.043540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 12:30:05.549146       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1007 12:30:50.871444       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.58.2:60596->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.876559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.58.2:60504->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.877112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.58.2:60476->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.878007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.58.2:60466->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.878107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.58.2:60454->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.878685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.58.2:60580->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.879539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.58.2:60566->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.881767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.58.2:60562->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.882768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.58.2:60554->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.882877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.58.2:60544->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.882986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.58.2:60534->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.883083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.58.2:60524->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.884728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.58.2:60512->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.884865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.58.2:60474->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 12:30:50.885343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.58.2:60492->192.168.58.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Oct 07 12:31:30 ha-600773 kubelet[760]: E1007 12:31:30.082667     760 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-600773?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 07 12:31:31 ha-600773 kubelet[760]: E1007 12:31:31.767060     760 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304291766883274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:31:31 ha-600773 kubelet[760]: E1007 12:31:31.767101     760 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304291766883274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:31:36 ha-600773 kubelet[760]: E1007 12:31:36.006979     760 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-600773?timeout=10s\": http2: client connection lost"
	Oct 07 12:31:36 ha-600773 kubelet[760]: I1007 12:31:36.007042     760 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Oct 07 12:31:36 ha-600773 kubelet[760]: E1007 12:31:36.007549     760 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-600773.17fc2b7e54826a31\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-ha-600773.17fc2b7e54826a31  kube-system   2710 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-600773,UID:687705adcdcb96dc5240ebfd39b7cb8e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:ha-600773,},FirstTimestamp:2024-10-07 12:29:38 +0000 UTC,LastTimestamp:2024-10-07 12:30:46.995026593 +0000 UTC m=+75.473612610,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-600773,}"
	Oct 07 12:31:36 ha-600773 kubelet[760]: I1007 12:31:36.007722     760 status_manager.go:851] "Failed to get status for pod" podUID="687705adcdcb96dc5240ebfd39b7cb8e" pod="kube-system/kube-apiserver-ha-600773" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-600773\": http2: client connection lost"
	Oct 07 12:31:36 ha-600773 kubelet[760]: W1007 12:31:36.008151     760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2674": http2: client connection lost
	Oct 07 12:31:36 ha-600773 kubelet[760]: E1007 12:31:36.008221     760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2674\": http2: client connection lost" logger="UnhandledError"
	Oct 07 12:31:36 ha-600773 kubelet[760]: W1007 12:31:36.009913     760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2672": http2: client connection lost
	Oct 07 12:31:36 ha-600773 kubelet[760]: E1007 12:31:36.010008     760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2672\": http2: client connection lost" logger="UnhandledError"
	Oct 07 12:31:36 ha-600773 kubelet[760]: W1007 12:31:36.010063     760 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-600773&resourceVersion=2879": http2: client connection lost
	Oct 07 12:31:36 ha-600773 kubelet[760]: E1007 12:31:36.010080     760 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-600773&resourceVersion=2879\": http2: client connection lost" logger="UnhandledError"
	Oct 07 12:31:36 ha-600773 kubelet[760]: W1007 12:31:36.010132     760 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2778": http2: client connection lost
	Oct 07 12:31:36 ha-600773 kubelet[760]: E1007 12:31:36.010151     760 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2778\": http2: client connection lost" logger="UnhandledError"
	Oct 07 12:31:36 ha-600773 kubelet[760]: W1007 12:31:36.010204     760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2755": http2: client connection lost
	Oct 07 12:31:36 ha-600773 kubelet[760]: E1007 12:31:36.010234     760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2755\": http2: client connection lost" logger="UnhandledError"
	Oct 07 12:31:36 ha-600773 kubelet[760]: W1007 12:31:36.010283     760 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2778": http2: client connection lost
	Oct 07 12:31:36 ha-600773 kubelet[760]: E1007 12:31:36.010300     760 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2778\": http2: client connection lost" logger="UnhandledError"
	Oct 07 12:31:36 ha-600773 kubelet[760]: W1007 12:31:36.010352     760 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2778": http2: client connection lost
	Oct 07 12:31:36 ha-600773 kubelet[760]: E1007 12:31:36.010370     760 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2778\": http2: client connection lost" logger="UnhandledError"
	Oct 07 12:31:36 ha-600773 kubelet[760]: W1007 12:31:36.010418     760 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2778": http2: client connection lost
	Oct 07 12:31:36 ha-600773 kubelet[760]: E1007 12:31:36.010434     760 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2778\": http2: client connection lost" logger="UnhandledError"
	Oct 07 12:31:41 ha-600773 kubelet[760]: E1007 12:31:41.768748     760 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304301768109820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:31:41 ha-600773 kubelet[760]: E1007 12:31:41.768793     760 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304301768109820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-600773 -n ha-600773
helpers_test.go:261: (dbg) Run:  kubectl --context ha-600773 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (139.74s)

                                                
                                    

Test pass (295/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.96
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 13.35
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 4.93
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.1
18 TestDownloadOnly/v1.31.1/DeleteAll 13.38
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 213.23
31 TestAddons/serial/GCPAuth/Namespaces 0.2
34 TestAddons/parallel/Registry 16.84
36 TestAddons/parallel/InspektorGadget 11.81
39 TestAddons/parallel/CSI 39.58
40 TestAddons/parallel/Headlamp 17.64
41 TestAddons/parallel/CloudSpanner 6.58
42 TestAddons/parallel/LocalPath 51.29
43 TestAddons/parallel/NvidiaDevicePlugin 6.5
44 TestAddons/parallel/Yakd 11.75
45 TestAddons/StoppedEnableDisable 12.47
46 TestCertOptions 34.68
47 TestCertExpiration 241.89
49 TestForceSystemdFlag 35.3
50 TestForceSystemdEnv 36.55
56 TestErrorSpam/setup 31.77
57 TestErrorSpam/start 0.82
58 TestErrorSpam/status 1.05
59 TestErrorSpam/pause 1.85
60 TestErrorSpam/unpause 1.79
61 TestErrorSpam/stop 1.59
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 81.53
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 16.37
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.49
73 TestFunctional/serial/CacheCmd/cache/add_local 1.39
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.94
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
81 TestFunctional/serial/ExtraConfig 42.99
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.65
84 TestFunctional/serial/LogsFileCmd 1.71
85 TestFunctional/serial/InvalidService 4.93
87 TestFunctional/parallel/ConfigCmd 0.49
88 TestFunctional/parallel/DashboardCmd 12.7
89 TestFunctional/parallel/DryRun 0.43
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.23
95 TestFunctional/parallel/ServiceCmdConnect 11.88
96 TestFunctional/parallel/AddonsCmd 0.31
97 TestFunctional/parallel/PersistentVolumeClaim 27.24
99 TestFunctional/parallel/SSHCmd 0.72
100 TestFunctional/parallel/CpCmd 2.28
102 TestFunctional/parallel/FileSync 0.37
103 TestFunctional/parallel/CertSync 2.11
107 TestFunctional/parallel/NodeLabels 0.16
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
111 TestFunctional/parallel/License 0.23
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.48
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.15
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.25
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
125 TestFunctional/parallel/ProfileCmd/profile_list 0.48
126 TestFunctional/parallel/ServiceCmd/List 0.6
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
129 TestFunctional/parallel/MountCmd/any-port 9.83
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.88
131 TestFunctional/parallel/ServiceCmd/Format 0.43
132 TestFunctional/parallel/ServiceCmd/URL 0.47
133 TestFunctional/parallel/MountCmd/specific-port 2.51
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.95
135 TestFunctional/parallel/Version/short 0.1
136 TestFunctional/parallel/Version/components 0.96
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.66
142 TestFunctional/parallel/ImageCommands/Setup 0.78
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.57
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.13
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.6
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 173.79
160 TestMultiControlPlane/serial/DeployApp 9.41
161 TestMultiControlPlane/serial/PingHostFromPods 1.65
162 TestMultiControlPlane/serial/AddWorkerNode 34.45
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
165 TestMultiControlPlane/serial/CopyFile 18.6
166 TestMultiControlPlane/serial/StopSecondaryNode 12.74
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
168 TestMultiControlPlane/serial/RestartSecondaryNode 22.92
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.47
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 209.32
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.2
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
173 TestMultiControlPlane/serial/StopCluster 35.91
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
176 TestMultiControlPlane/serial/AddSecondaryNode 70.96
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.97
181 TestJSONOutput/start/Command 47.84
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.74
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.66
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.87
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 42.27
207 TestKicCustomNetwork/use_default_bridge_network 36.33
208 TestKicExistingNetwork 30.73
209 TestKicCustomSubnet 33.87
210 TestKicStaticIP 35.27
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 66.17
215 TestMountStart/serial/StartWithMountFirst 9.65
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 6.65
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.63
220 TestMountStart/serial/VerifyMountPostDelete 0.27
221 TestMountStart/serial/Stop 1.21
222 TestMountStart/serial/RestartStopped 7.89
223 TestMountStart/serial/VerifyMountPostStop 0.27
226 TestMultiNode/serial/FreshStart2Nodes 106.14
227 TestMultiNode/serial/DeployApp2Nodes 6.18
228 TestMultiNode/serial/PingHostFrom2Pods 0.96
229 TestMultiNode/serial/AddNode 27.75
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.68
232 TestMultiNode/serial/CopyFile 10.08
233 TestMultiNode/serial/StopNode 2.24
234 TestMultiNode/serial/StartAfterStop 9.75
235 TestMultiNode/serial/RestartKeepsNodes 111.59
236 TestMultiNode/serial/DeleteNode 5.55
237 TestMultiNode/serial/StopMultiNode 23.9
238 TestMultiNode/serial/RestartMultiNode 60.47
239 TestMultiNode/serial/ValidateNameConflict 34.79
244 TestPreload 126.37
246 TestScheduledStopUnix 104.36
249 TestInsufficientStorage 10.78
250 TestRunningBinaryUpgrade 73.09
252 TestKubernetesUpgrade 392.95
253 TestMissingContainerUpgrade 157.9
255 TestPause/serial/Start 83.03
256 TestPause/serial/SecondStartNoReconfiguration 22.38
257 TestPause/serial/Pause 0.76
258 TestPause/serial/VerifyStatus 0.3
259 TestPause/serial/Unpause 0.65
260 TestPause/serial/PauseAgain 0.84
261 TestPause/serial/DeletePaused 2.65
262 TestPause/serial/VerifyDeletedResources 0.14
263 TestStoppedBinaryUpgrade/Setup 1
264 TestStoppedBinaryUpgrade/Upgrade 78.97
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
275 TestNoKubernetes/serial/StartWithK8s 32.04
276 TestNoKubernetes/serial/StartWithStopK8s 6.88
277 TestNoKubernetes/serial/Start 6.5
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
279 TestNoKubernetes/serial/ProfileList 16.7
280 TestNoKubernetes/serial/Stop 1.2
281 TestNoKubernetes/serial/StartNoArgs 6.86
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
290 TestNetworkPlugins/group/false 3.98
295 TestStartStop/group/old-k8s-version/serial/FirstStart 183.2
297 TestStartStop/group/no-preload/serial/FirstStart 60.84
298 TestStartStop/group/no-preload/serial/DeployApp 10.39
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
300 TestStartStop/group/no-preload/serial/Stop 12.2
301 TestStartStop/group/old-k8s-version/serial/DeployApp 10.61
302 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
303 TestStartStop/group/no-preload/serial/SecondStart 267.4
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.45
305 TestStartStop/group/old-k8s-version/serial/Stop 12.26
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
307 TestStartStop/group/old-k8s-version/serial/SecondStart 138.71
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
311 TestStartStop/group/old-k8s-version/serial/Pause 3.07
313 TestStartStop/group/embed-certs/serial/FirstStart 82.89
314 TestStartStop/group/embed-certs/serial/DeployApp 9.41
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.15
317 TestStartStop/group/embed-certs/serial/Stop 12.04
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
320 TestStartStop/group/no-preload/serial/Pause 3.29
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.37
322 TestStartStop/group/embed-certs/serial/SecondStart 296.37
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.85
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.37
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.96
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.19
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.36
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
333 TestStartStop/group/embed-certs/serial/Pause 3.11
335 TestStartStop/group/newest-cni/serial/FirstStart 36.52
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
338 TestStartStop/group/newest-cni/serial/Stop 1.28
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/newest-cni/serial/SecondStart 16.09
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
344 TestStartStop/group/newest-cni/serial/Pause 3.25
345 TestNetworkPlugins/group/auto/Start 53.01
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.84
350 TestNetworkPlugins/group/kindnet/Start 52.72
351 TestNetworkPlugins/group/auto/KubeletFlags 0.44
352 TestNetworkPlugins/group/auto/NetCatPod 12.37
353 TestNetworkPlugins/group/auto/DNS 0.19
354 TestNetworkPlugins/group/auto/Localhost 0.17
355 TestNetworkPlugins/group/auto/HairPin 0.16
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.37
359 TestNetworkPlugins/group/calico/Start 69.83
360 TestNetworkPlugins/group/kindnet/DNS 0.22
361 TestNetworkPlugins/group/kindnet/Localhost 0.18
362 TestNetworkPlugins/group/kindnet/HairPin 0.35
363 TestNetworkPlugins/group/custom-flannel/Start 63.87
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.35
366 TestNetworkPlugins/group/calico/NetCatPod 12.35
367 TestNetworkPlugins/group/calico/DNS 0.21
368 TestNetworkPlugins/group/calico/Localhost 0.15
369 TestNetworkPlugins/group/calico/HairPin 0.16
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.35
372 TestNetworkPlugins/group/custom-flannel/DNS 0.24
373 TestNetworkPlugins/group/enable-default-cni/Start 84.35
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
376 TestNetworkPlugins/group/flannel/Start 58.37
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
381 TestNetworkPlugins/group/flannel/NetCatPod 12.28
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
385 TestNetworkPlugins/group/flannel/DNS 0.25
386 TestNetworkPlugins/group/flannel/Localhost 0.2
387 TestNetworkPlugins/group/flannel/HairPin 0.18
388 TestNetworkPlugins/group/bridge/Start 44.8
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
390 TestNetworkPlugins/group/bridge/NetCatPod 10.24
391 TestNetworkPlugins/group/bridge/DNS 25.9
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.14
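The durations above are in seconds. To re-run a single test from this matrix against a local build, a minimal sketch is (assuming minikube's usual test/integration layout; the suite's extra flags for driver, runtime and binary path are omitted here and would need to match how this job invokes it):

# from a minikube checkout, pick any test name from the table above
go test -v -timeout 60m -run 'TestFunctional/parallel/MountCmd/any-port' ./test/integration/...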
x
+
TestDownloadOnly/v1.20.0/json-events (5.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-513494 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-513494 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.96404518s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.96s)
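For reference, this download-only flow can be reproduced by hand with a minikube binary; the sketch below mirrors the command in the log (the profile name is illustrative, the CI invocation's duplicated --container-runtime flag is dropped, and ~/.minikube is the default cache location rather than the Jenkins path used above):

minikube start -o=json --download-only -p download-only-demo --force \
  --alsologtostderr --kubernetes-version=v1.20.0 \
  --container-runtime=crio --driver=docker
# the v1.20.0 cri-o preload tarball should now be cached locally
ls ~/.minikube/cache/preloaded-tarball/
minikube delete -p download-only-demo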

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1007 11:57:23.651118 1178462 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1007 11:57:23.651196 1178462 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-513494
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-513494: exit status 85 (74.279694ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-513494 | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |          |
	|         | -p download-only-513494        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:57:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:57:17.737960 1178467 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:57:17.738171 1178467 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:57:17.738201 1178467 out.go:358] Setting ErrFile to fd 2...
	I1007 11:57:17.738224 1178467 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:57:17.738513 1178467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
	W1007 11:57:17.738697 1178467 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19763-1173066/.minikube/config/config.json: open /home/jenkins/minikube-integration/19763-1173066/.minikube/config/config.json: no such file or directory
	I1007 11:57:17.739151 1178467 out.go:352] Setting JSON to true
	I1007 11:57:17.740137 1178467 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27582,"bootTime":1728274656,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 11:57:17.740235 1178467 start.go:139] virtualization:  
	I1007 11:57:17.742877 1178467 out.go:97] [download-only-513494] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1007 11:57:17.743034 1178467 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball: no such file or directory
	I1007 11:57:17.743153 1178467 notify.go:220] Checking for updates...
	I1007 11:57:17.745090 1178467 out.go:169] MINIKUBE_LOCATION=19763
	I1007 11:57:17.746881 1178467 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:57:17.748623 1178467 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 11:57:17.750451 1178467 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	I1007 11:57:17.752143 1178467 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1007 11:57:17.755289 1178467 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 11:57:17.755641 1178467 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:57:17.776889 1178467 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 11:57:17.777014 1178467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:57:17.849369 1178467 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-07 11:57:17.839567221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:57:17.849484 1178467 docker.go:318] overlay module found
	I1007 11:57:17.851355 1178467 out.go:97] Using the docker driver based on user configuration
	I1007 11:57:17.851389 1178467 start.go:297] selected driver: docker
	I1007 11:57:17.851404 1178467 start.go:901] validating driver "docker" against <nil>
	I1007 11:57:17.851514 1178467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:57:17.901361 1178467 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-07 11:57:17.891145843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:57:17.901579 1178467 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:57:17.901872 1178467 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1007 11:57:17.902031 1178467 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 11:57:17.903879 1178467 out.go:169] Using Docker driver with root privileges
	I1007 11:57:17.905575 1178467 cni.go:84] Creating CNI manager for ""
	I1007 11:57:17.905653 1178467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 11:57:17.905667 1178467 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 11:57:17.905750 1178467 start.go:340] cluster config:
	{Name:download-only-513494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-513494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:57:17.907775 1178467 out.go:97] Starting "download-only-513494" primary control-plane node in "download-only-513494" cluster
	I1007 11:57:17.907807 1178467 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 11:57:17.909473 1178467 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1007 11:57:17.909505 1178467 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 11:57:17.909689 1178467 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 11:57:17.928266 1178467 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 11:57:17.928291 1178467 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 11:57:17.928450 1178467 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 11:57:17.928562 1178467 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 11:57:17.961842 1178467 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1007 11:57:17.961882 1178467 cache.go:56] Caching tarball of preloaded images
	I1007 11:57:17.962430 1178467 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 11:57:17.964607 1178467 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1007 11:57:17.964647 1178467 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1007 11:57:18.051880 1178467 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1007 11:57:21.967637 1178467 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1007 11:57:21.967731 1178467 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1007 11:57:22.443246 1178467 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	
	
	* The control-plane node download-only-513494 host does not exist
	  To start a cluster, run: "minikube start -p download-only-513494"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (13.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-linux-arm64 delete --all: (13.349349675s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (13.35s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-513494
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (4.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-459102 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-459102 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.925167126s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.93s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1007 11:57:42.132002 1178462 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1007 11:57:42.132046 1178462 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-1173066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-459102
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-459102: exit status 85 (98.976601ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-513494 | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |                     |
	|         | -p download-only-513494        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	| delete  | -p download-only-513494        | download-only-513494 | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	| start   | -o=json --download-only        | download-only-459102 | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |                     |
	|         | -p download-only-459102        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:57:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:57:37.252364 1178720 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:57:37.252502 1178720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:57:37.252514 1178720 out.go:358] Setting ErrFile to fd 2...
	I1007 11:57:37.252519 1178720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:57:37.252750 1178720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
	I1007 11:57:37.253130 1178720 out.go:352] Setting JSON to true
	I1007 11:57:37.253978 1178720 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27602,"bootTime":1728274656,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 11:57:37.254045 1178720 start.go:139] virtualization:  
	I1007 11:57:37.256330 1178720 out.go:97] [download-only-459102] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 11:57:37.256591 1178720 notify.go:220] Checking for updates...
	I1007 11:57:37.258322 1178720 out.go:169] MINIKUBE_LOCATION=19763
	I1007 11:57:37.260158 1178720 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:57:37.261582 1178720 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 11:57:37.263106 1178720 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	I1007 11:57:37.264751 1178720 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1007 11:57:37.267738 1178720 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 11:57:37.267965 1178720 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:57:37.296206 1178720 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 11:57:37.296342 1178720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:57:37.351478 1178720 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-07 11:57:37.341952214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:57:37.351588 1178720 docker.go:318] overlay module found
	I1007 11:57:37.353416 1178720 out.go:97] Using the docker driver based on user configuration
	I1007 11:57:37.353440 1178720 start.go:297] selected driver: docker
	I1007 11:57:37.353446 1178720 start.go:901] validating driver "docker" against <nil>
	I1007 11:57:37.353538 1178720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:57:37.402770 1178720 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-07 11:57:37.393665046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:57:37.402976 1178720 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:57:37.403240 1178720 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1007 11:57:37.403407 1178720 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 11:57:37.405651 1178720 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-459102 host does not exist
	  To start a cluster, run: "minikube start -p download-only-459102"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (13.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-linux-arm64 delete --all: (13.379739417s)
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (13.38s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-459102
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
I1007 11:57:56.568109 1178462 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-325982 --alsologtostderr --binary-mirror http://127.0.0.1:33869 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-325982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-325982
--- PASS: TestBinaryMirror (0.57s)
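The binary-mirror test points minikube's Kubernetes binary downloads (kubectl, kubelet, kubeadm) at a local HTTP endpoint instead of dl.k8s.io. A rough manual equivalent is to put any static file server on that port and run a download-only start against it (the python server and the mirror directory layout are illustrative assumptions, not taken from this run):

# serve pre-downloaded Kubernetes release files locally
python3 -m http.server 33869 --directory ./mirror &
minikube start --download-only -p binary-mirror-demo --alsologtostderr \
  --binary-mirror http://127.0.0.1:33869 --driver=docker --container-runtime=crio
minikube delete -p binary-mirror-demo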

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-504513
addons_test.go:934: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-504513: exit status 85 (77.099631ms)

                                                
                                                
-- stdout --
	* Profile "addons-504513" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-504513"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-504513
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-504513: exit status 85 (68.871349ms)

                                                
                                                
-- stdout --
	* Profile "addons-504513" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-504513"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (213.23s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-504513 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-504513 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m33.225182144s)
--- PASS: TestAddons/Setup (213.23s)
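The setup above enables the whole addon matrix at start time; individual addons can also be toggled on the running profile, which is what the parallel tests below do when they clean up after themselves, e.g.:

minikube -p addons-504513 addons list
minikube -p addons-504513 addons enable metrics-server
minikube -p addons-504513 addons disable metrics-server --alsologtostderr -v=1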

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-504513 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-504513 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.336904ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-fb9ws" [b8858fa3-9d16-4d5e-ba15-1cb90ece82b4] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004032919s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-j7gr2" [2a98cc91-7c93-4911-ac0f-e807e5996a10] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013478673s
addons_test.go:331: (dbg) Run:  kubectl --context addons-504513 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-504513 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-504513 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.821948322s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 ip
2024/10/07 12:09:59 [DEBUG] GET http://192.168.58.2:5000
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.84s)
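The check above exercises the registry from inside the cluster (wget against registry.kube-system.svc.cluster.local) and then hits it from the host on the node IP at port 5000. The same host-side probe can be done against the registry v2 API (the /v2/_catalog path is part of the stock Docker registry API, not something specific to this run):

# 192.168.58.2 in this run; `minikube ip` returns the node IP for the profile
curl -s "http://$(minikube -p addons-504513 ip):5000/v2/_catalog"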

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7gw4z" [9fda4a5d-4efd-4a4b-b1e6-a7d9a634c45a] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005534514s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-504513 addons disable inspektor-gadget --alsologtostderr -v=1: (5.802317665s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                    
x
+
TestAddons/parallel/CSI (39.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1007 12:09:59.775201 1178462 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1007 12:09:59.786509 1178462 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1007 12:09:59.786545 1178462 kapi.go:107] duration metric: took 11.355696ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 11.365746ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-504513 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-504513 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4758dd94-e940-4352-b7f7-d9478c713010] Pending
helpers_test.go:344: "task-pv-pod" [4758dd94-e940-4352-b7f7-d9478c713010] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4758dd94-e940-4352-b7f7-d9478c713010] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004308446s
addons_test.go:511: (dbg) Run:  kubectl --context addons-504513 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-504513 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-504513 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-504513 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-504513 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-504513 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-504513 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d0eed680-feab-48e5-b8f6-f34e68718643] Pending
helpers_test.go:344: "task-pv-pod-restore" [d0eed680-feab-48e5-b8f6-f34e68718643] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d0eed680-feab-48e5-b8f6-f34e68718643] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00485592s
addons_test.go:553: (dbg) Run:  kubectl --context addons-504513 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-504513 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-504513 delete volumesnapshot new-snapshot-demo
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-504513 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.828860199s)
--- PASS: TestAddons/parallel/CSI (39.58s)
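The flow above drives the csi-hostpath driver through a PVC, a pod, a VolumeSnapshot, and a restored PVC using minikube's bundled testdata manifests. A minimal standalone PVC against that driver looks roughly like this; the storage class name csi-hostpath-sc is an assumption about what the addon installs, so confirm it first:

kubectl --context addons-504513 get storageclass
kubectl --context addons-504513 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc   # assumption; use the name listed above
EOF
# the claim typically stays Pending until a pod consumes it
kubectl --context addons-504513 get pvc hpvc-demo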

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-504513 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-cfcnr" [96844c44-2503-4d8b-930b-3434e41873c0] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-cfcnr" [96844c44-2503-4d8b-930b-3434e41873c0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-cfcnr" [96844c44-2503-4d8b-930b-3434e41873c0] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003270349s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 addons disable headlamp --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-504513 addons disable headlamp --alsologtostderr -v=1: (6.731249453s)
--- PASS: TestAddons/parallel/Headlamp (17.64s)
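The harness above polls the pod list for up to 8m0s. A rough stand-alone equivalent of that health check, using the label and namespace shown in the log and the profile from this run, is:

minikube addons enable headlamp -p addons-504513
kubectl --context addons-504513 wait pod \
  --namespace headlamp \
  --selector app.kubernetes.io/name=headlamp \
  --for=condition=Ready \
  --timeout=8m
minikube -p addons-504513 addons disable headlamp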

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-vr46n" [28145496-aeb6-4e85-a1ef-5f328a2a7473] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003502762s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (51.29s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:883: (dbg) Run:  kubectl --context addons-504513 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:889: (dbg) Run:  kubectl --context addons-504513 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:893: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504513 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bddf3782-de56-47fb-8308-4b7e85b5df53] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bddf3782-de56-47fb-8308-4b7e85b5df53] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bddf3782-de56-47fb-8308-4b7e85b5df53] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003475232s
addons_test.go:901: (dbg) Run:  kubectl --context addons-504513 get pvc test-pvc -o=json
addons_test.go:910: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 ssh "cat /opt/local-path-provisioner/pvc-2b3d24e7-13fc-45fb-a4ba-0b05f67be457_default_test-pvc/file1"
addons_test.go:922: (dbg) Run:  kubectl --context addons-504513 delete pod test-local-path
addons_test.go:926: (dbg) Run:  kubectl --context addons-504513 delete pvc test-pvc
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-504513 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.2379491s)
--- PASS: TestAddons/parallel/LocalPath (51.29s)
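The pvc.yaml and pod.yaml under testdata/storage-provisioner-rancher are not included in the report. A minimal sketch of equivalent objects follows; the storageClassName local-path is an assumption about the default class installed by the storage-provisioner-rancher addon, and the file contents written to file1 are hypothetical (the test only checks that the file can be read back from the node).

kubectl --context addons-504513 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path        # assumed default class of the rancher provisioner
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path-test > /data/file1"]   # hypothetical payload
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF

# The provisioner backs the volume with a host directory under /opt/local-path-provisioner,
# which is why the test can read the file back over SSH:
minikube -p addons-504513 ssh "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"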

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.5s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zfrr9" [c8079eb2-5614-417f-b0b4-df99129833bd] Running
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00397861s
addons_test.go:961: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-504513
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.75s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-w6jm6" [32c6db93-6070-474b-86b6-19cf1abc68c6] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003452629s
addons_test.go:973: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 addons disable yakd --alsologtostderr -v=1
addons_test.go:973: (dbg) Done: out/minikube-linux-arm64 -p addons-504513 addons disable yakd --alsologtostderr -v=1: (5.749959846s)
--- PASS: TestAddons/parallel/Yakd (11.75s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.47s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-504513
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-504513: (12.200198887s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-504513
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-504513
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-504513
--- PASS: TestAddons/StoppedEnableDisable (12.47s)

                                                
                                    
x
+
TestCertOptions (34.68s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-758659 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-758659 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.945540125s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-758659 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-758659 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-758659 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-758659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-758659
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-758659: (2.032723622s)
--- PASS: TestCertOptions (34.68s)
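The test above starts a cluster with extra API server SANs and a non-default port, then inspects the generated certificate and kubeconfig. A quick manual version of the same check, using the paths the test reads and the profile from this run, is:

# Inspect the apiserver certificate for the SANs passed via --apiserver-ips / --apiserver-names.
minikube -p cert-options-758659 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"

# Confirm the kubeconfig points at the non-default API server port (8555 in this run).
kubectl --context cert-options-758659 config view --minify -o jsonpath='{.clusters[0].cluster.server}'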

                                                
                                    
x
+
TestCertExpiration (241.89s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-411302 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-411302 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.585008498s)
E1007 12:57:33.792158 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-411302 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-411302 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (16.908014417s)
helpers_test.go:175: Cleaning up "cert-expiration-411302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-411302
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-411302: (2.400905503s)
--- PASS: TestCertExpiration (241.89s)
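The sequence in the log (start with a 3-minute certificate lifetime, let it lapse, then start again with 8760h so the certificates are re-issued) can be reproduced roughly as below. The sleep is only there to let the short-lived certs expire; it is an assumption about intent, not harness code.

minikube start -p cert-expiration-411302 --memory=2048 --cert-expiration=3m \
  --driver=docker --container-runtime=crio

# Wait past the 3-minute lifetime (the test itself simply idles during this window).
sleep 200

# Starting again with a one-year expiry regenerates the expired certificates.
minikube start -p cert-expiration-411302 --memory=2048 --cert-expiration=8760h \
  --driver=docker --container-runtime=crio

minikube delete -p cert-expiration-411302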

                                                
                                    
x
+
TestForceSystemdFlag (35.3s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-034684 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-034684 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.72577003s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-034684 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-034684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-034684
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-034684: (2.293444031s)
--- PASS: TestForceSystemdFlag (35.30s)
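The assertion behind the `cat /etc/crio/crio.conf.d/02-crio.conf` step is that --force-systemd switches CRI-O to the systemd cgroup manager. A hand-run version of that check is sketched below; the cgroup_manager key name is assumed from standard CRI-O configuration, not quoted from the report.

minikube start -p force-systemd-flag-034684 --memory=2048 --force-systemd \
  --driver=docker --container-runtime=crio

# Expect the systemd cgroup manager to be configured when --force-systemd is set
# (key name assumed from standard CRI-O configuration).
minikube -p force-systemd-flag-034684 ssh \
  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager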

                                                
                                    
x
+
TestForceSystemdEnv (36.55s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-122806 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-122806 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.174343272s)
helpers_test.go:175: Cleaning up "force-systemd-env-122806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-122806
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-122806: (2.372502365s)
--- PASS: TestForceSystemdEnv (36.55s)

                                                
                                    
x
+
TestErrorSpam/setup (31.77s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-217468 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-217468 --driver=docker  --container-runtime=crio
E1007 12:16:31.241917 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:16:31.248259 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:16:31.259566 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:16:31.280894 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:16:31.322216 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:16:31.403547 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:16:31.564960 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:16:31.886529 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:16:32.528476 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:16:33.809807 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:16:36.371547 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-217468 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-217468 --driver=docker  --container-runtime=crio: (31.772488426s)
--- PASS: TestErrorSpam/setup (31.77s)

                                                
                                    
x
+
TestErrorSpam/start (0.82s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

                                                
                                    
x
+
TestErrorSpam/status (1.05s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 status
--- PASS: TestErrorSpam/status (1.05s)

                                                
                                    
x
+
TestErrorSpam/pause (1.85s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 pause
E1007 12:16:41.493586 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 pause
--- PASS: TestErrorSpam/pause (1.85s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
x
+
TestErrorSpam/stop (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 stop: (1.390361861s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217468 --log_dir /tmp/nospam-217468 stop
--- PASS: TestErrorSpam/stop (1.59s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19763-1173066/.minikube/files/etc/test/nested/copy/1178462/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (81.53s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-809471 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1007 12:16:51.735465 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:17:12.216914 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:17:53.178303 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-809471 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m21.529677112s)
--- PASS: TestFunctional/serial/StartWithProxy (81.53s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (16.37s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1007 12:18:12.399108 1178462 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-809471 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-809471 --alsologtostderr -v=8: (16.373755968s)
functional_test.go:663: soft start took 16.374295228s for "functional-809471" cluster.
I1007 12:18:28.773177 1178462 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (16.37s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-809471 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-809471 cache add registry.k8s.io/pause:3.1: (1.602676811s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-809471 cache add registry.k8s.io/pause:3.3: (1.488688295s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-809471 cache add registry.k8s.io/pause:latest: (1.40112964s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.49s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-809471 /tmp/TestFunctionalserialCacheCmdcacheadd_local2121312359/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 cache add minikube-local-cache-test:functional-809471
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 cache delete minikube-local-cache-test:functional-809471
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-809471
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-809471 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.432101ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-809471 cache reload: (1.023164912s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)
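The flow above, removing an image from the node's runtime, confirming crictl no longer finds it, then restoring it from minikube's on-host cache, can be replayed by hand with the commands the log records:

# Remove the image from the node's container runtime.
minikube -p functional-809471 ssh sudo crictl rmi registry.k8s.io/pause:latest

# crictl inspecti now fails with "no such image", as captured in the report.
minikube -p functional-809471 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true

# Push everything in minikube's local cache back into the node, then re-check.
minikube -p functional-809471 cache reload
minikube -p functional-809471 ssh sudo crictl inspecti registry.k8s.io/pause:latest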

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 kubectl -- --context functional-809471 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-809471 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (42.99s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-809471 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1007 12:19:15.103174 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-809471 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.99393006s)
functional_test.go:761: restart took 42.994033706s for "functional-809471" cluster.
I1007 12:19:20.554756 1178462 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (42.99s)
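--extra-config=apiserver.<flag>=<value> is passed through to the kube-apiserver when the cluster is (re)started. One way to confirm the admission plugin actually reached the running apiserver is sketched below; selecting the static pod by its component label is an assumption about the usual kubeadm labeling, not something shown in the report.

minikube start -p functional-809471 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all

# Check that the flag landed on the kube-apiserver static pod
# (label component=kube-apiserver assumed from kubeadm conventions).
kubectl --context functional-809471 -n kube-system get pod \
  -l component=kube-apiserver -o yaml | grep enable-admission-plugins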

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-809471 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-809471 logs: (1.649728297s)
--- PASS: TestFunctional/serial/LogsCmd (1.65s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 logs --file /tmp/TestFunctionalserialLogsFileCmd3418972491/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-809471 logs --file /tmp/TestFunctionalserialLogsFileCmd3418972491/001/logs.txt: (1.705979438s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.93s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-809471 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-809471
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-809471: exit status 115 (636.642902ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.58.2:31753 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-809471 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-809471 delete -f testdata/invalidsvc.yaml: (1.038878367s)
--- PASS: TestFunctional/serial/InvalidService (4.93s)
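testdata/invalidsvc.yaml is not reproduced in the report. Any NodePort service whose selector matches no running pod triggers the SVC_UNREACHABLE exit seen above; the manifest below is a hypothetical example of such a service, not the test's actual fixture.

kubectl --context functional-809471 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist          # hypothetical label that no pod carries
  ports:
  - port: 80
    targetPort: 80
EOF

# With no backing pod, this exits non-zero (exit status 115 / SVC_UNREACHABLE in the run above).
minikube service invalid-svc -p functional-809471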

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-809471 config get cpus: exit status 14 (77.88859ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-809471 config get cpus: exit status 14 (76.874891ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
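The exit status 14 captured above is what `config get` returns for a key that has never been set (or has been unset). The round trip the test exercises looks like this when run by hand:

minikube -p functional-809471 config unset cpus
minikube -p functional-809471 config get cpus       # exits 14: key not present
minikube -p functional-809471 config set cpus 2
minikube -p functional-809471 config get cpus       # prints 2, exits 0
minikube -p functional-809471 config unset cpus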

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (12.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-809471 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-809471 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1211357: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.70s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-809471 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-809471 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (182.234294ms)

                                                
                                                
-- stdout --
	* [functional-809471] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:20:03.442022 1211120 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:20:03.442227 1211120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:20:03.442239 1211120 out.go:358] Setting ErrFile to fd 2...
	I1007 12:20:03.442260 1211120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:20:03.442698 1211120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
	I1007 12:20:03.443181 1211120 out.go:352] Setting JSON to false
	I1007 12:20:03.444341 1211120 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28948,"bootTime":1728274656,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 12:20:03.444442 1211120 start.go:139] virtualization:  
	I1007 12:20:03.447122 1211120 out.go:177] * [functional-809471] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 12:20:03.449971 1211120 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:20:03.450132 1211120 notify.go:220] Checking for updates...
	I1007 12:20:03.453655 1211120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:20:03.455617 1211120 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 12:20:03.457822 1211120 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	I1007 12:20:03.459532 1211120 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 12:20:03.461742 1211120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:20:03.464162 1211120 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:20:03.464807 1211120 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:20:03.490939 1211120 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 12:20:03.491067 1211120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:20:03.547006 1211120 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-07 12:20:03.535472809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:20:03.547120 1211120 docker.go:318] overlay module found
	I1007 12:20:03.552346 1211120 out.go:177] * Using the docker driver based on existing profile
	I1007 12:20:03.554502 1211120 start.go:297] selected driver: docker
	I1007 12:20:03.554527 1211120 start.go:901] validating driver "docker" against &{Name:functional-809471 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-809471 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:20:03.554653 1211120 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:20:03.557236 1211120 out.go:201] 
	W1007 12:20:03.559287 1211120 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1007 12:20:03.561115 1211120 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-809471 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
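A dry run with a memory request below minikube's 1800MB floor fails validation without touching the existing cluster; the exit status (23, RSRC_INSUFFICIENT_REQ_MEMORY, in this run) can be checked directly:

minikube start -p functional-809471 --dry-run --memory 250MB \
  --driver=docker --container-runtime=crio
echo "dry-run exit status: $?"     # 23 in the run captured above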

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-809471 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-809471 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.446043ms)

                                                
                                                
-- stdout --
	* [functional-809471] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:20:03.249465 1211076 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:20:03.249642 1211076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:20:03.249667 1211076 out.go:358] Setting ErrFile to fd 2...
	I1007 12:20:03.249685 1211076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:20:03.250080 1211076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
	I1007 12:20:03.250507 1211076 out.go:352] Setting JSON to false
	I1007 12:20:03.251525 1211076 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28948,"bootTime":1728274656,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 12:20:03.251604 1211076 start.go:139] virtualization:  
	I1007 12:20:03.260814 1211076 out.go:177] * [functional-809471] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1007 12:20:03.263716 1211076 notify.go:220] Checking for updates...
	I1007 12:20:03.265933 1211076 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:20:03.268685 1211076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:20:03.270548 1211076 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 12:20:03.272446 1211076 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	I1007 12:20:03.275162 1211076 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 12:20:03.277351 1211076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:20:03.280037 1211076 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:20:03.280610 1211076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:20:03.308440 1211076 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 12:20:03.308566 1211076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:20:03.366269 1211076 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-07 12:20:03.355790174 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:20:03.366376 1211076 docker.go:318] overlay module found
	I1007 12:20:03.369873 1211076 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1007 12:20:03.371696 1211076 start.go:297] selected driver: docker
	I1007 12:20:03.371718 1211076 start.go:901] validating driver "docker" against &{Name:functional-809471 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-809471 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:20:03.371839 1211076 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:20:03.374636 1211076 out.go:201] 
	W1007 12:20:03.376504 1211076 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1007 12:20:03.378539 1211076 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)
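For reference, the three status invocations exercised above can be reproduced by hand against the same profile (a minimal sketch using only flags that appear in this log):

# Default status, a custom Go-template format, and JSON output
out/minikube-linux-arm64 -p functional-809471 status
out/minikube-linux-arm64 -p functional-809471 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
out/minikube-linux-arm64 -p functional-809471 status -o json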

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-809471 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-809471 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-vvdqn" [3017aa67-f8bf-42bc-b868-c2ac73a04ed9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-vvdqn" [3017aa67-f8bf-42bc-b868-c2ac73a04ed9] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.006256677s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.58.2:30632
functional_test.go:1675: http://192.168.58.2:30632: success! body:

Hostname: hello-node-connect-65d86f57f4-vvdqn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.58.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.58.2:30632
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.88s)
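Condensed, the round trip this subtest verifies looks like the following (a sketch; image, port, and commands are taken from the log, and the curl step is the manual equivalent of the HTTP check):

# Deploy the echoserver, expose it as a NodePort service, then fetch the URL minikube reports
kubectl --context functional-809471 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-809471 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(out/minikube-linux-arm64 -p functional-809471 service hello-node-connect --url)
curl -s "$URL"    # returns the Hostname / Request Information body shown above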

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [50214cfb-8228-49ca-9d40-3e9b71f68218] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003785068s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-809471 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-809471 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-809471 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-809471 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1fc55e73-ef76-4886-87c0-c7192952ceb2] Pending
helpers_test.go:344: "sp-pod" [1fc55e73-ef76-4886-87c0-c7192952ceb2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1fc55e73-ef76-4886-87c0-c7192952ceb2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004039899s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-809471 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-809471 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-809471 delete -f testdata/storage-provisioner/pod.yaml: (1.25840514s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-809471 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cf523833-c396-4c79-8f93-ee6fe32d25bf] Pending
helpers_test.go:344: "sp-pod" [cf523833-c396-4c79-8f93-ee6fe32d25bf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003800103s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-809471 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.24s)
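The testdata manifests themselves are not reproduced in this report; a hypothetical stand-in for the claim they create (name taken from the log, access mode and size assumed) would be:

# Hypothetical PVC roughly matching what testdata/storage-provisioner/pvc.yaml appears to create
kubectl --context functional-809471 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim                    # name matches "kubectl get pvc myclaim" above
spec:
  accessModes: ["ReadWriteOnce"]   # assumed
  resources:
    requests:
      storage: 500Mi               # assumed size
EOF
kubectl --context functional-809471 get pvc myclaim -o=json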

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh -n functional-809471 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 cp functional-809471:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3401599581/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh -n functional-809471 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh -n functional-809471 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.28s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1178462/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "sudo cat /etc/test/nested/copy/1178462/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)
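The check works because files placed under the files/ tree of MINIKUBE_HOME are synced into the node at the mirrored path; a rough manual equivalent (MINIKUBE_HOME taken from the environment shown earlier in this log, and the file must be in place before the node is provisioned):

# Drop a file under MINIKUBE_HOME/files/...; it appears at the same path inside the node
MKHOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
mkdir -p "$MKHOME/files/etc/test/nested/copy/1178462"
echo "Test file for checking file sync process" > "$MKHOME/files/etc/test/nested/copy/1178462/hosts"
out/minikube-linux-arm64 -p functional-809471 ssh "sudo cat /etc/test/nested/copy/1178462/hosts"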

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1178462.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "sudo cat /etc/ssl/certs/1178462.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1178462.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "sudo cat /usr/share/ca-certificates/1178462.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/11784622.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "sudo cat /etc/ssl/certs/11784622.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/11784622.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "sudo cat /usr/share/ca-certificates/11784622.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)
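Similarly, the certificates being checked are copied into the node's trust locations at provisioning time from the certs/ directory of MINIKUBE_HOME (the exact host-side source directory is an assumption here); verification mirrors what the test does:

# Both the named PEM and its copy under /usr/share/ca-certificates should be readable in the node
out/minikube-linux-arm64 -p functional-809471 ssh "sudo cat /etc/ssl/certs/1178462.pem"
out/minikube-linux-arm64 -p functional-809471 ssh "sudo cat /usr/share/ca-certificates/1178462.pem"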

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-809471 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-809471 ssh "sudo systemctl is-active docker": exit status 1 (335.42808ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-809471 ssh "sudo systemctl is-active containerd": exit status 1 (372.082364ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
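The non-zero exits above are the expected result: with crio as the selected runtime, systemctl reports docker and containerd as inactive and exits non-zero, which the ssh wrapper surfaces as exit status 1. A manual spot check (a sketch):

# "inactive" on stdout plus a non-zero exit code is the passing condition here
out/minikube-linux-arm64 -p functional-809471 ssh "sudo systemctl is-active docker"; echo "exit=$?"
out/minikube-linux-arm64 -p functional-809471 ssh "sudo systemctl is-active crio"; echo "exit=$?"   # the active runtime should exit 0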

                                                
                                    
x
+
TestFunctional/parallel/License (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-809471 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-809471 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-809471 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1209025: os: process already finished
helpers_test.go:502: unable to terminate pid 1208846: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-809471 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-809471 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-809471 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c129d33e-e554-4778-810e-ebc9d932a22d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c129d33e-e554-4778-810e-ebc9d932a22d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003937173s
I1007 12:19:39.199690 1178462 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-809471 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.205.189 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
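Put together, the tunnel flow these serial steps verify can be reproduced like this (a sketch; testsvc.yaml is the manifest applied above and defines the nginx-svc LoadBalancer service):

# With a tunnel running, a LoadBalancer service gets an ingress IP that is directly reachable from the host
out/minikube-linux-arm64 -p functional-809471 tunnel --alsologtostderr &    # keep running in the background
kubectl --context functional-809471 apply -f testdata/testsvc.yaml
IP=$(kubectl --context functional-809471 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://$IP"    # e.g. http://10.99.205.189 in this run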

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-809471 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-809471 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-809471 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-sn775" [3f689f04-21ca-461d-abb0-51c24023b200] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-sn775" [3f689f04-21ca-461d-abb0-51c24023b200] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.00501292s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "418.171408ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "59.154815ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "416.49484ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "110.145346ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)
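For reference, the profile-listing variants timed above (all flags taken from the log):

# Table output, full JSON, and the cheaper --light JSON listing
out/minikube-linux-arm64 profile list
out/minikube-linux-arm64 profile list -o json
out/minikube-linux-arm64 profile list -o json --light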

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 service list -o json
functional_test.go:1494: Took "663.03438ms" to run "out/minikube-linux-arm64 -p functional-809471 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-809471 /tmp/TestFunctionalparallelMountCmdany-port2514825522/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728303599907983698" to /tmp/TestFunctionalparallelMountCmdany-port2514825522/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728303599907983698" to /tmp/TestFunctionalparallelMountCmdany-port2514825522/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728303599907983698" to /tmp/TestFunctionalparallelMountCmdany-port2514825522/001/test-1728303599907983698
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-809471 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (831.876763ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1007 12:20:00.744747 1178462 retry.go:31] will retry after 497.886253ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  7 12:19 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  7 12:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  7 12:19 test-1728303599907983698
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh cat /mount-9p/test-1728303599907983698
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-809471 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9a75e696-4ea8-42ee-ac1a-3ce97475d8f0] Pending
helpers_test.go:344: "busybox-mount" [9a75e696-4ea8-42ee-ac1a-3ce97475d8f0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9a75e696-4ea8-42ee-ac1a-3ce97475d8f0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9a75e696-4ea8-42ee-ac1a-3ce97475d8f0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003891309s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-809471 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-809471 /tmp/TestFunctionalparallelMountCmdany-port2514825522/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.83s)
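The 9p mount sequence in this test, condensed into its manual equivalent (a sketch; the host directory below is a placeholder rather than the per-test temp dir):

# Mount a host directory into the node over 9p, verify it from inside the node, then clean up
out/minikube-linux-arm64 mount -p functional-809471 /tmp/some-host-dir:/mount-9p &   # placeholder host path
out/minikube-linux-arm64 -p functional-809471 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-809471 ssh -- ls -la /mount-9p
out/minikube-linux-arm64 -p functional-809471 ssh "sudo umount -f /mount-9p"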

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.58.2:30828
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.88s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.58.2:30828
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-809471 /tmp/TestFunctionalparallelMountCmdspecific-port4152194625/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-809471 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (476.022686ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1007 12:20:10.211705 1178462 retry.go:31] will retry after 696.827004ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-809471 /tmp/TestFunctionalparallelMountCmdspecific-port4152194625/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-809471 ssh "sudo umount -f /mount-9p": exit status 1 (386.127921ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-809471 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-809471 /tmp/TestFunctionalparallelMountCmdspecific-port4152194625/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.51s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-809471 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3088786007/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-809471 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3088786007/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-809471 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3088786007/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-809471 ssh "findmnt -T" /mount1: (1.116788442s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-809471 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-809471 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3088786007/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-809471 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3088786007/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-809471 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3088786007/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-809471 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-809471
localhost/kicbase/echo-server:functional-809471
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-809471 image ls --format short --alsologtostderr:
I1007 12:20:22.764938 1213876 out.go:345] Setting OutFile to fd 1 ...
I1007 12:20:22.765148 1213876 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:20:22.765169 1213876 out.go:358] Setting ErrFile to fd 2...
I1007 12:20:22.765188 1213876 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:20:22.765507 1213876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
I1007 12:20:22.766188 1213876 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:20:22.766344 1213876 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:20:22.766841 1213876 cli_runner.go:164] Run: docker container inspect functional-809471 --format={{.State.Status}}
I1007 12:20:22.791206 1213876 ssh_runner.go:195] Run: systemctl --version
I1007 12:20:22.791261 1213876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-809471
I1007 12:20:22.818114 1213876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34257 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/functional-809471/id_rsa Username:docker}
I1007 12:20:22.917370 1213876 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
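This and the following three ImageCommands subtests render the same image inventory in different encodings; the four invocations are simply:

# short, table, json and yaml renderings of the node's image list
out/minikube-linux-arm64 -p functional-809471 image ls --format short
out/minikube-linux-arm64 -p functional-809471 image ls --format table
out/minikube-linux-arm64 -p functional-809471 image ls --format json
out/minikube-linux-arm64 -p functional-809471 image ls --format yaml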

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-809471 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | alpine             | 577a23b5858b9 | 52.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/minikube-local-cache-test     | functional-809471  | 4d234f6dca4d0 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/library/nginx                 | latest             | 048e090385966 | 201MB  |
| localhost/kicbase/echo-server           | functional-809471  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-809471 image ls --format table --alsologtostderr:
I1007 12:20:23.359802 1214031 out.go:345] Setting OutFile to fd 1 ...
I1007 12:20:23.359981 1214031 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:20:23.359992 1214031 out.go:358] Setting ErrFile to fd 2...
I1007 12:20:23.359997 1214031 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:20:23.360286 1214031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
I1007 12:20:23.360918 1214031 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:20:23.361033 1214031 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:20:23.361502 1214031 cli_runner.go:164] Run: docker container inspect functional-809471 --format={{.State.Status}}
I1007 12:20:23.382829 1214031 ssh_runner.go:195] Run: systemctl --version
I1007 12:20:23.382891 1214031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-809471
I1007 12:20:23.405481 1214031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34257 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/functional-809471/id_rsa Username:docker}
I1007 12:20:23.504649 1214031 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-809471 image ls --format json --alsologtostderr:
[{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74
cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-809471"],"size":"4788229"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f
0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250","docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478"],"repoTags":["docker.io/library/nginx:alpine"],"size":"52254450"},{"id":"4d234f6dca4d04b0c1c4ebeafcbfefa6d86aa4b14828cfe84ae7cd09e68f6927","repoDigests":["localhost/minikube-local-cache-test@sha256:d58239e102197289be0c3f6b2689888563c78759c
3fe6b4f2a88954a41b6fbc6"],"repoTags":["localhost/minikube-local-cache-test:functional-809471"],"size":"3330"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90f
c4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce2
06e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"200984127"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@
sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-809471 image ls --format json --alsologtostderr:
I1007 12:20:23.068126 1213946 out.go:345] Setting OutFile to fd 1 ...
I1007 12:20:23.068541 1213946 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:20:23.068565 1213946 out.go:358] Setting ErrFile to fd 2...
I1007 12:20:23.068572 1213946 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:20:23.068930 1213946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
I1007 12:20:23.069772 1213946 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:20:23.069950 1213946 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:20:23.070616 1213946 cli_runner.go:164] Run: docker container inspect functional-809471 --format={{.State.Status}}
I1007 12:20:23.109101 1213946 ssh_runner.go:195] Run: systemctl --version
I1007 12:20:23.109159 1213946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-809471
I1007 12:20:23.139788 1213946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34257 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/functional-809471/id_rsa Username:docker}
I1007 12:20:23.237749 1213946 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
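Note: the image ls command above shells into the node and reads the CRI image store with "sudo crictl images --output json", as the stderr log shows; the ImageListYaml test below exercises the same path with --format yaml. A minimal sketch of querying the same data from the host, assuming jq is installed; the profile name functional-809471 is taken from this run.

  # List the repo tags known to the container runtime inside the functional-809471 node.
  out/minikube-linux-arm64 -p functional-809471 image ls --format json | jq -r '.[].repoTags[]'
  # Or ask crictl directly over minikube ssh, bypassing the image ls formatting.
  out/minikube-linux-arm64 -p functional-809471 ssh -- sudo crictl images --output json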

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-809471 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-809471
size: "4788229"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
- docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478
repoTags:
- docker.io/library/nginx:alpine
size: "52254450"
- id: 4d234f6dca4d04b0c1c4ebeafcbfefa6d86aa4b14828cfe84ae7cd09e68f6927
repoDigests:
- localhost/minikube-local-cache-test@sha256:d58239e102197289be0c3f6b2689888563c78759c3fe6b4f2a88954a41b6fbc6
repoTags:
- localhost/minikube-local-cache-test:functional-809471
size: "3330"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "200984127"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-809471 image ls --format yaml --alsologtostderr:
I1007 12:20:22.759056 1213877 out.go:345] Setting OutFile to fd 1 ...
I1007 12:20:22.759266 1213877 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:20:22.759275 1213877 out.go:358] Setting ErrFile to fd 2...
I1007 12:20:22.759279 1213877 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:20:22.759556 1213877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
I1007 12:20:22.760241 1213877 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:20:22.760469 1213877 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:20:22.760987 1213877 cli_runner.go:164] Run: docker container inspect functional-809471 --format={{.State.Status}}
I1007 12:20:22.779712 1213877 ssh_runner.go:195] Run: systemctl --version
I1007 12:20:22.779780 1213877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-809471
I1007 12:20:22.800565 1213877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34257 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/functional-809471/id_rsa Username:docker}
I1007 12:20:22.892904 1213877 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-809471 ssh pgrep buildkitd: exit status 1 (341.387963ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image build -t localhost/my-image:functional-809471 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-809471 image build -t localhost/my-image:functional-809471 testdata/build --alsologtostderr: (3.083999818s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-809471 image build -t localhost/my-image:functional-809471 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0c796bd64c5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-809471
--> a4c9d428c72
Successfully tagged localhost/my-image:functional-809471
a4c9d428c729959f6e654d23530dc4611962ed8943c7ddff6eea6d587f6a8aed
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-809471 image build -t localhost/my-image:functional-809471 testdata/build --alsologtostderr:
I1007 12:20:23.363279 1214035 out.go:345] Setting OutFile to fd 1 ...
I1007 12:20:23.363842 1214035 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:20:23.363877 1214035 out.go:358] Setting ErrFile to fd 2...
I1007 12:20:23.363900 1214035 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:20:23.364175 1214035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
I1007 12:20:23.364965 1214035 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:20:23.365583 1214035 config.go:182] Loaded profile config "functional-809471": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:20:23.366180 1214035 cli_runner.go:164] Run: docker container inspect functional-809471 --format={{.State.Status}}
I1007 12:20:23.391241 1214035 ssh_runner.go:195] Run: systemctl --version
I1007 12:20:23.391292 1214035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-809471
I1007 12:20:23.418372 1214035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34257 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/functional-809471/id_rsa Username:docker}
I1007 12:20:23.521411 1214035 build_images.go:161] Building image from path: /tmp/build.947921607.tar
I1007 12:20:23.521478 1214035 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1007 12:20:23.532817 1214035 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.947921607.tar
I1007 12:20:23.537202 1214035 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.947921607.tar: stat -c "%s %y" /var/lib/minikube/build/build.947921607.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.947921607.tar': No such file or directory
I1007 12:20:23.537231 1214035 ssh_runner.go:362] scp /tmp/build.947921607.tar --> /var/lib/minikube/build/build.947921607.tar (3072 bytes)
I1007 12:20:23.566231 1214035 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.947921607
I1007 12:20:23.577833 1214035 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.947921607 -xf /var/lib/minikube/build/build.947921607.tar
I1007 12:20:23.587952 1214035 crio.go:315] Building image: /var/lib/minikube/build/build.947921607
I1007 12:20:23.588055 1214035 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-809471 /var/lib/minikube/build/build.947921607 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1007 12:20:26.348072 1214035 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-809471 /var/lib/minikube/build/build.947921607 --cgroup-manager=cgroupfs: (2.759975161s)
I1007 12:20:26.348142 1214035 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.947921607
I1007 12:20:26.357128 1214035 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.947921607.tar
I1007 12:20:26.365875 1214035 build_images.go:217] Built localhost/my-image:functional-809471 from /tmp/build.947921607.tar
I1007 12:20:26.365910 1214035 build_images.go:133] succeeded building to: functional-809471
I1007 12:20:26.365915 1214035 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)
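Note: the stderr log above shows the build flow: the local context is tarred, copied to /var/lib/minikube/build on the node, unpacked, and built there with "sudo podman build ... --cgroup-manager=cgroupfs". A sketch that reproduces an equivalent three-step build; the Dockerfile contents are an assumption reconstructed from the STEP 1/3..3/3 lines, since testdata/build itself is not printed in the log.

  # Hypothetical build context mirroring the three recorded steps.
  mkdir -p /tmp/minikube-build
  printf 'hello from the build test\n' > /tmp/minikube-build/content.txt
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/minikube-build/Dockerfile
  # Build inside the node's runtime and confirm the image landed in the CRI store.
  out/minikube-linux-arm64 -p functional-809471 image build -t localhost/my-image:functional-809471 /tmp/minikube-build --alsologtostderr
  out/minikube-linux-arm64 -p functional-809471 image ls | grep my-image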

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-809471
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image load --daemon kicbase/echo-server:functional-809471 --alsologtostderr
2024/10/07 12:20:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-809471 image load --daemon kicbase/echo-server:functional-809471 --alsologtostderr: (1.261234445s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.57s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
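Note: the three UpdateContextCmd tests rerun the same command under different cluster states; update-context rewrites the kubeconfig entry for the profile when the cluster's endpoint has changed. A minimal sketch, assuming kubectl reads the same kubeconfig that minikube writes:

  out/minikube-linux-arm64 -p functional-809471 update-context --alsologtostderr -v=2
  # Confirm the refreshed context is usable.
  kubectl config current-context
  kubectl --context functional-809471 get nodes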

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image load --daemon kicbase/echo-server:functional-809471 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-809471
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image load --daemon kicbase/echo-server:functional-809471 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image save kicbase/echo-server:functional-809471 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image rm kicbase/echo-server:functional-809471 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-809471
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-809471 image save --daemon kicbase/echo-server:functional-809471 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-809471
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)
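Note: the ImageLoad*/ImageSave*/ImageRemove tests above move the echo-server image back and forth between the host Docker daemon, a tar archive, and the node's CRI store. A condensed sketch of the same round trip; the tar path under /tmp is only illustrative (the test writes into the Jenkins workspace instead).

  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-809471
  # Host daemon -> node, node -> tar, remove, tar -> node, node -> host daemon.
  out/minikube-linux-arm64 -p functional-809471 image load --daemon kicbase/echo-server:functional-809471
  out/minikube-linux-arm64 -p functional-809471 image save kicbase/echo-server:functional-809471 /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-809471 image rm kicbase/echo-server:functional-809471
  out/minikube-linux-arm64 -p functional-809471 image load /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-809471 image save --daemon kicbase/echo-server:functional-809471
  docker image inspect localhost/kicbase/echo-server:functional-809471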

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-809471
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-809471
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-809471
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (173.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-600773 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1007 12:21:31.242330 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:21:58.945043 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-600773 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m53.00338276s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (173.79s)
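Note: the --ha flag starts the profile with three control-plane nodes sharing a single API endpoint (192.168.58.254:8443 later in this report), and AddWorkerNode below attaches a fourth, worker-only node. A sketch of the same sequence with the flags from this run:

  out/minikube-linux-arm64 start -p ha-600773 --ha --wait=true --memory=2200 \
    --driver=docker --container-runtime=crio
  # Add a worker and confirm all four nodes report Running.
  out/minikube-linux-arm64 node add -p ha-600773
  out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr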

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-600773 -- rollout status deployment/busybox: (6.49988796s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-4k82z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-jdnkg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-krzk6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-4k82z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-jdnkg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-krzk6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-4k82z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-jdnkg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-krzk6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.41s)
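Note: the test applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox deployment to roll out, then runs nslookup from every replica to verify in-cluster DNS on each node. A sketch of the verification loop using kubectl directly against the ha-600773 context; like the test, it takes every pod name in the default namespace, which in this run is only the three busybox replicas.

  kubectl --context ha-600773 rollout status deployment/busybox
  # Resolve the in-cluster service name from every busybox replica.
  for pod in $(kubectl --context ha-600773 get pods -o jsonpath='{.items[*].metadata.name}'); do
    kubectl --context ha-600773 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done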

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-4k82z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-4k82z -- sh -c "ping -c 1 192.168.58.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-jdnkg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-jdnkg -- sh -c "ping -c 1 192.168.58.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-krzk6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-600773 -- exec busybox-7dff88458-krzk6 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.65s)
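Note: each replica resolves host.minikube.internal and pings the address it gets back, the docker network gateway (192.168.58.1 in this run), proving pods on every node can reach the host. A single-pod sketch of the same check, with the pod name taken from the log; awk 'NR==5' simply picks the line of nslookup output that carries the resolved address.

  kubectl --context ha-600773 exec busybox-7dff88458-4k82z -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  kubectl --context ha-600773 exec busybox-7dff88458-4k82z -- ping -c 1 192.168.58.1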

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (34.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-600773 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-600773 -v=7 --alsologtostderr: (33.494359356s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (34.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-600773 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.024196303s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)
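Note: the HAppy* checks parse minikube profile list --output json to confirm every node in the profile is healthy. A minimal sketch of reading that output, assuming jq is available; the .valid and .Name field names are an assumption about the JSON shape, which is not reproduced in this log.

  out/minikube-linux-arm64 profile list
  # .valid[].Name is an assumed path into the JSON output.
  out/minikube-linux-arm64 profile list --output json | jq '.valid[].Name'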

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (18.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp testdata/cp-test.txt ha-600773:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1049508879/001/cp-test_ha-600773.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773:/home/docker/cp-test.txt ha-600773-m02:/home/docker/cp-test_ha-600773_ha-600773-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m02 "sudo cat /home/docker/cp-test_ha-600773_ha-600773-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773:/home/docker/cp-test.txt ha-600773-m03:/home/docker/cp-test_ha-600773_ha-600773-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m03 "sudo cat /home/docker/cp-test_ha-600773_ha-600773-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773:/home/docker/cp-test.txt ha-600773-m04:/home/docker/cp-test_ha-600773_ha-600773-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m04 "sudo cat /home/docker/cp-test_ha-600773_ha-600773-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp testdata/cp-test.txt ha-600773-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1049508879/001/cp-test_ha-600773-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773-m02:/home/docker/cp-test.txt ha-600773:/home/docker/cp-test_ha-600773-m02_ha-600773.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773 "sudo cat /home/docker/cp-test_ha-600773-m02_ha-600773.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773-m02:/home/docker/cp-test.txt ha-600773-m03:/home/docker/cp-test_ha-600773-m02_ha-600773-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m03 "sudo cat /home/docker/cp-test_ha-600773-m02_ha-600773-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773-m02:/home/docker/cp-test.txt ha-600773-m04:/home/docker/cp-test_ha-600773-m02_ha-600773-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m04 "sudo cat /home/docker/cp-test_ha-600773-m02_ha-600773-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp testdata/cp-test.txt ha-600773-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1049508879/001/cp-test_ha-600773-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773-m03:/home/docker/cp-test.txt ha-600773:/home/docker/cp-test_ha-600773-m03_ha-600773.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773 "sudo cat /home/docker/cp-test_ha-600773-m03_ha-600773.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773-m03:/home/docker/cp-test.txt ha-600773-m02:/home/docker/cp-test_ha-600773-m03_ha-600773-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m02 "sudo cat /home/docker/cp-test_ha-600773-m03_ha-600773-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773-m03:/home/docker/cp-test.txt ha-600773-m04:/home/docker/cp-test_ha-600773-m03_ha-600773-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m04 "sudo cat /home/docker/cp-test_ha-600773-m03_ha-600773-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp testdata/cp-test.txt ha-600773-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1049508879/001/cp-test_ha-600773-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773-m04:/home/docker/cp-test.txt ha-600773:/home/docker/cp-test_ha-600773-m04_ha-600773.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773 "sudo cat /home/docker/cp-test_ha-600773-m04_ha-600773.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773-m04:/home/docker/cp-test.txt ha-600773-m02:/home/docker/cp-test_ha-600773-m04_ha-600773-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m02 "sudo cat /home/docker/cp-test_ha-600773-m04_ha-600773-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 cp ha-600773-m04:/home/docker/cp-test.txt ha-600773-m03:/home/docker/cp-test_ha-600773-m04_ha-600773-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m03 "sudo cat /home/docker/cp-test_ha-600773-m04_ha-600773-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.60s)
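Note: CopyFile pushes testdata/cp-test.txt to every node, copies it node-to-node, and reads each copy back over ssh. A two-command sketch of one hop, with the paths taken from the log:

  out/minikube-linux-arm64 -p ha-600773 cp testdata/cp-test.txt ha-600773-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-600773 ssh -n ha-600773-m02 "sudo cat /home/docker/cp-test.txt"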

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 node stop m02 -v=7 --alsologtostderr
E1007 12:24:30.726667 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:24:30.733174 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:24:30.744603 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:24:30.766093 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:24:30.807474 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:24:30.888944 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:24:31.050442 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:24:31.372161 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:24:32.013686 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:24:33.295115 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:24:35.857452 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-600773 node stop m02 -v=7 --alsologtostderr: (11.992843539s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr
E1007 12:24:40.979081 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr: exit status 7 (745.020446ms)

                                                
                                                
-- stdout --
	ha-600773
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-600773-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-600773-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-600773-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:24:40.516087 1229795 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:24:40.516202 1229795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:24:40.516207 1229795 out.go:358] Setting ErrFile to fd 2...
	I1007 12:24:40.516212 1229795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:24:40.516551 1229795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
	I1007 12:24:40.516740 1229795 out.go:352] Setting JSON to false
	I1007 12:24:40.516759 1229795 mustload.go:65] Loading cluster: ha-600773
	I1007 12:24:40.517016 1229795 notify.go:220] Checking for updates...
	I1007 12:24:40.517255 1229795 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:24:40.517283 1229795 status.go:174] checking status of ha-600773 ...
	I1007 12:24:40.517905 1229795 cli_runner.go:164] Run: docker container inspect ha-600773 --format={{.State.Status}}
	I1007 12:24:40.541178 1229795 status.go:371] ha-600773 host status = "Running" (err=<nil>)
	I1007 12:24:40.541202 1229795 host.go:66] Checking if "ha-600773" exists ...
	I1007 12:24:40.541512 1229795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-600773
	I1007 12:24:40.564374 1229795 host.go:66] Checking if "ha-600773" exists ...
	I1007 12:24:40.564664 1229795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:24:40.564751 1229795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773
	I1007 12:24:40.587617 1229795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34262 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773/id_rsa Username:docker}
	I1007 12:24:40.681933 1229795 ssh_runner.go:195] Run: systemctl --version
	I1007 12:24:40.686651 1229795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:24:40.698854 1229795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:24:40.760467 1229795 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:5 ContainersRunning:4 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:true NGoroutines:81 SystemTime:2024-10-07 12:24:40.749454166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:24:40.761059 1229795 kubeconfig.go:125] found "ha-600773" server: "https://192.168.58.254:8443"
	I1007 12:24:40.761110 1229795 api_server.go:166] Checking apiserver status ...
	I1007 12:24:40.761158 1229795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:24:40.772233 1229795 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup
	I1007 12:24:40.782198 1229795 api_server.go:182] apiserver freezer: "4:freezer:/docker/82aa0f339f38d1d3c2254427bd3b1a4bb8da8b165c52c4ff811edb03a807c9f5/crio/crio-bc1a3e94cb7a9e544c4868c5539cb41d73f9f7b919d764f93630ca5354f879bf"
	I1007 12:24:40.782270 1229795 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/82aa0f339f38d1d3c2254427bd3b1a4bb8da8b165c52c4ff811edb03a807c9f5/crio/crio-bc1a3e94cb7a9e544c4868c5539cb41d73f9f7b919d764f93630ca5354f879bf/freezer.state
	I1007 12:24:40.791050 1229795 api_server.go:204] freezer state: "THAWED"
	I1007 12:24:40.791081 1229795 api_server.go:253] Checking apiserver healthz at https://192.168.58.254:8443/healthz ...
	I1007 12:24:40.799550 1229795 api_server.go:279] https://192.168.58.254:8443/healthz returned 200:
	ok
	I1007 12:24:40.799576 1229795 status.go:463] ha-600773 apiserver status = Running (err=<nil>)
	I1007 12:24:40.799589 1229795 status.go:176] ha-600773 status: &{Name:ha-600773 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:24:40.799632 1229795 status.go:174] checking status of ha-600773-m02 ...
	I1007 12:24:40.799945 1229795 cli_runner.go:164] Run: docker container inspect ha-600773-m02 --format={{.State.Status}}
	I1007 12:24:40.817205 1229795 status.go:371] ha-600773-m02 host status = "Stopped" (err=<nil>)
	I1007 12:24:40.817226 1229795 status.go:384] host is not running, skipping remaining checks
	I1007 12:24:40.817233 1229795 status.go:176] ha-600773-m02 status: &{Name:ha-600773-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:24:40.817266 1229795 status.go:174] checking status of ha-600773-m03 ...
	I1007 12:24:40.817594 1229795 cli_runner.go:164] Run: docker container inspect ha-600773-m03 --format={{.State.Status}}
	I1007 12:24:40.835195 1229795 status.go:371] ha-600773-m03 host status = "Running" (err=<nil>)
	I1007 12:24:40.835217 1229795 host.go:66] Checking if "ha-600773-m03" exists ...
	I1007 12:24:40.835517 1229795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-600773-m03
	I1007 12:24:40.853960 1229795 host.go:66] Checking if "ha-600773-m03" exists ...
	I1007 12:24:40.854259 1229795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:24:40.854311 1229795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m03
	I1007 12:24:40.871086 1229795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34272 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m03/id_rsa Username:docker}
	I1007 12:24:40.966461 1229795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:24:40.978952 1229795 kubeconfig.go:125] found "ha-600773" server: "https://192.168.58.254:8443"
	I1007 12:24:40.979140 1229795 api_server.go:166] Checking apiserver status ...
	I1007 12:24:40.979189 1229795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:24:40.990715 1229795 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1309/cgroup
	I1007 12:24:41.004406 1229795 api_server.go:182] apiserver freezer: "4:freezer:/docker/76d293ec9fdaf70e0d8fdb30906207a789cf9f09237f459dcaca9d8a0a768706/crio/crio-22ecf290ae2b278ef6dfc35b9ee52014b742413c40be2694632f91cbf86a71d6"
	I1007 12:24:41.004490 1229795 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/76d293ec9fdaf70e0d8fdb30906207a789cf9f09237f459dcaca9d8a0a768706/crio/crio-22ecf290ae2b278ef6dfc35b9ee52014b742413c40be2694632f91cbf86a71d6/freezer.state
	I1007 12:24:41.014432 1229795 api_server.go:204] freezer state: "THAWED"
	I1007 12:24:41.014464 1229795 api_server.go:253] Checking apiserver healthz at https://192.168.58.254:8443/healthz ...
	I1007 12:24:41.022306 1229795 api_server.go:279] https://192.168.58.254:8443/healthz returned 200:
	ok
	I1007 12:24:41.022336 1229795 status.go:463] ha-600773-m03 apiserver status = Running (err=<nil>)
	I1007 12:24:41.022346 1229795 status.go:176] ha-600773-m03 status: &{Name:ha-600773-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:24:41.022363 1229795 status.go:174] checking status of ha-600773-m04 ...
	I1007 12:24:41.022678 1229795 cli_runner.go:164] Run: docker container inspect ha-600773-m04 --format={{.State.Status}}
	I1007 12:24:41.039106 1229795 status.go:371] ha-600773-m04 host status = "Running" (err=<nil>)
	I1007 12:24:41.039132 1229795 host.go:66] Checking if "ha-600773-m04" exists ...
	I1007 12:24:41.039419 1229795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-600773-m04
	I1007 12:24:41.058696 1229795 host.go:66] Checking if "ha-600773-m04" exists ...
	I1007 12:24:41.059002 1229795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:24:41.059049 1229795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-600773-m04
	I1007 12:24:41.078506 1229795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34277 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/ha-600773-m04/id_rsa Username:docker}
	I1007 12:24:41.177637 1229795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:24:41.190105 1229795 status.go:176] ha-600773-m04 status: &{Name:ha-600773-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.74s)
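Note: with m02 stopped, status exits with code 7, and the verbose stderr above shows how each control plane is probed: locate the kube-apiserver process, read its freezer cgroup state, then hit /healthz on the shared endpoint 192.168.58.254:8443. A sketch of the stop/inspect/restart cycle; the curl probe assumes curl is present in the node image and uses -k because the endpoint serves the cluster's own certificate.

  out/minikube-linux-arm64 -p ha-600773 node stop m02 -v=7 --alsologtostderr
  # Exit status 7 is expected while a control-plane node is down.
  out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr || echo "status exited with $?"
  # Probe the load-balanced apiserver endpoint from inside a surviving node.
  out/minikube-linux-arm64 -p ha-600773 ssh -- curl -sk https://192.168.58.254:8443/healthz
  out/minikube-linux-arm64 -p ha-600773 node start m02 -v=7 --alsologtostderr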

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (22.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 node start m02 -v=7 --alsologtostderr
E1007 12:24:51.221104 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-600773 node start m02 -v=7 --alsologtostderr: (21.271083345s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr: (1.471997964s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.92s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.472869161s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.47s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (209.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-600773 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-600773 -v=7 --alsologtostderr
E1007 12:25:11.703356 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-600773 -v=7 --alsologtostderr: (37.155172433s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-600773 --wait=true -v=7 --alsologtostderr
E1007 12:25:52.665491 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:26:31.242096 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:27:14.587361 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-600773 --wait=true -v=7 --alsologtostderr: (2m51.973057226s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-600773
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (209.32s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-600773 node delete m03 -v=7 --alsologtostderr: (11.261292327s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.20s)
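
The go-template in the final step above walks every node's status.conditions and prints the status of each "Ready" condition, so a healthy cluster prints one "True" line per remaining node. A rough stand-alone equivalent in Go (an illustration, not code from ha_test.go):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeList models only the fields the Ready check needs.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		panic(err)
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}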

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-600773 stop -v=7 --alsologtostderr: (35.791672194s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr: exit status 7 (117.963051ms)

                                                
                                                
-- stdout --
	ha-600773
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-600773-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-600773-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:29:24.444893 1244365 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:29:24.445058 1244365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:29:24.445068 1244365 out.go:358] Setting ErrFile to fd 2...
	I1007 12:29:24.445073 1244365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:29:24.445329 1244365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
	I1007 12:29:24.445517 1244365 out.go:352] Setting JSON to false
	I1007 12:29:24.445543 1244365 mustload.go:65] Loading cluster: ha-600773
	I1007 12:29:24.445683 1244365 notify.go:220] Checking for updates...
	I1007 12:29:24.445959 1244365 config.go:182] Loaded profile config "ha-600773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:29:24.445978 1244365 status.go:174] checking status of ha-600773 ...
	I1007 12:29:24.446553 1244365 cli_runner.go:164] Run: docker container inspect ha-600773 --format={{.State.Status}}
	I1007 12:29:24.463755 1244365 status.go:371] ha-600773 host status = "Stopped" (err=<nil>)
	I1007 12:29:24.463779 1244365 status.go:384] host is not running, skipping remaining checks
	I1007 12:29:24.463788 1244365 status.go:176] ha-600773 status: &{Name:ha-600773 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:29:24.463816 1244365 status.go:174] checking status of ha-600773-m02 ...
	I1007 12:29:24.464121 1244365 cli_runner.go:164] Run: docker container inspect ha-600773-m02 --format={{.State.Status}}
	I1007 12:29:24.490142 1244365 status.go:371] ha-600773-m02 host status = "Stopped" (err=<nil>)
	I1007 12:29:24.490169 1244365 status.go:384] host is not running, skipping remaining checks
	I1007 12:29:24.490178 1244365 status.go:176] ha-600773-m02 status: &{Name:ha-600773-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:29:24.490198 1244365 status.go:174] checking status of ha-600773-m04 ...
	I1007 12:29:24.490505 1244365 cli_runner.go:164] Run: docker container inspect ha-600773-m04 --format={{.State.Status}}
	I1007 12:29:24.507998 1244365 status.go:371] ha-600773-m04 host status = "Stopped" (err=<nil>)
	I1007 12:29:24.508023 1244365 status.go:384] host is not running, skipping remaining checks
	I1007 12:29:24.508032 1244365 status.go:176] ha-600773-m04 status: &{Name:ha-600773-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.91s)
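
As the stderr shows, the status command first runs docker container inspect <node> --format={{.State.Status}}; once the container is not "running" it skips the kubelet and apiserver checks and reports everything as Stopped. A small Go sketch of that container-state probe (a hypothetical helper, not minikube's status.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState returns the Docker state ("running", "exited", ...) of a node
// container, mirroring the inspect call in the log above.
func hostState(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		container, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for _, node := range []string{"ha-600773", "ha-600773-m02", "ha-600773-m04"} {
		state, err := hostState(node)
		if err != nil {
			fmt.Println(node, "inspect failed:", err)
			continue
		}
		if state != "running" {
			// Host is not running, so the remaining checks are skipped.
			fmt.Println(node, "host:", state)
			continue
		}
		fmt.Println(node, "host: running")
	}
}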

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (70.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-600773 --control-plane -v=7 --alsologtostderr
E1007 12:32:54.306729 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-600773 --control-plane -v=7 --alsologtostderr: (1m9.972158623s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-600773 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.96s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

                                                
                                    
TestJSONOutput/start/Command (47.84s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-696576 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-696576 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (47.829453499s)
--- PASS: TestJSONOutput/start/Command (47.84s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-696576 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-696576 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-696576 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-696576 --output=json --user=testUser: (5.867347085s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-068343 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-068343 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.205666ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"575f2abe-3cae-424c-bd89-00c764f5b160","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-068343] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a469d29-87c8-45a1-9ce3-f87113aa5718","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19763"}}
	{"specversion":"1.0","id":"07ee61ee-c0c3-454b-999e-e956d536fc59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ff237dc7-0fbc-4813-a413-4443cfeb809b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig"}}
	{"specversion":"1.0","id":"d4c61f4a-5639-43c7-a1dc-00d106ac7e8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube"}}
	{"specversion":"1.0","id":"291c72b6-fc69-4ce8-9df0-64db7234c82b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"00a5fd07-1105-4044-86d3-3910b2e37e65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e3dedb65-a1c6-4444-b413-b3172aaef506","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-068343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-068343
--- PASS: TestErrorJSONOutput (0.22s)
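
With --output=json, each step, info message, and error is emitted as one CloudEvents-style JSON object per line, as in the stdout above (types io.k8s.sigs.minikube.step, .info, and .error). The sketch below shows one way a consumer might pull the error events out of such a stream; the field names come from the lines above, while reading the stream from stdin is an assumption of the example:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the JSON lines above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. pipe `minikube start --output=json ...` into this program
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		var ev event
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}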

                                                
                                    
TestKicCustomNetwork/create_custom_network (42.27s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-587515 --network=
E1007 12:34:30.726624 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-587515 --network=: (40.165841112s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-587515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-587515
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-587515: (2.08293274s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.27s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.33s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-131340 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-131340 --network=bridge: (34.359090698s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-131340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-131340
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-131340: (1.941604515s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.33s)

                                                
                                    
TestKicExistingNetwork (30.73s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1007 12:35:27.603270 1178462 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1007 12:35:27.618373 1178462 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1007 12:35:27.619617 1178462 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1007 12:35:27.620300 1178462 cli_runner.go:164] Run: docker network inspect existing-network
W1007 12:35:27.634457 1178462 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1007 12:35:27.634489 1178462 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1007 12:35:27.634504 1178462 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1007 12:35:27.635171 1178462 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1007 12:35:27.654871 1178462 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa98f111c271 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:cf:52:8b:17} reservation:<nil>}
I1007 12:35:27.658056 1178462 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d2dc7c09db9b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d6:f1:ea:68} reservation:<nil>}
I1007 12:35:27.665006 1178462 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1007 12:35:27.667341 1178462 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40013cabc0}
I1007 12:35:27.668121 1178462 network_create.go:124] attempt to create docker network existing-network 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1007 12:35:27.668861 1178462 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1007 12:35:27.741104 1178462 network_create.go:108] docker network existing-network 192.168.76.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-130463 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-130463 --network=existing-network: (28.62638297s)
helpers_test.go:175: Cleaning up "existing-network-130463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-130463
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-130463: (1.928619635s)
I1007 12:35:58.312644 1178462 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.73s)
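
The setup log above shows the subnet picker: candidate private /24s are tried in order (192.168.49.0/24 and 192.168.58.0/24 are taken by existing bridges, 192.168.67.0/24 is reserved) until a free one is found, and the bridge network is then created with matching --subnet and --gateway. A simplified sketch of that walk follows; the +9 step in the third octet matches the log, but the "taken" check via docker network inspect is a simplification of network_create.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets lists the subnets already claimed by existing Docker networks.
func takenSubnets() (map[string]bool, error) {
	out, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(out)) {
		sub, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		if err != nil {
			continue
		}
		if s := strings.TrimSpace(string(sub)); s != "" {
			taken[s] = true
		}
	}
	return taken, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		panic(err)
	}
	// Walk 192.168.49.0/24, .58, .67, ... as in the log above.
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", third)
		fmt.Println("using free private subnet", subnet, "with gateway", gateway)
		// e.g. docker network create --driver=bridge --subnet=<subnet> --gateway=<gateway> my-network
		break
	}
}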

                                                
                                    
TestKicCustomSubnet (33.87s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-248832 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-248832 --subnet=192.168.60.0/24: (31.670679391s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-248832 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-248832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-248832
E1007 12:36:31.241895 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-248832: (2.1655257s)
--- PASS: TestKicCustomSubnet (33.87s)

                                                
                                    
TestKicStaticIP (35.27s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-235680 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-235680 --static-ip=192.168.200.200: (33.075530382s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-235680 ip
helpers_test.go:175: Cleaning up "static-ip-235680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-235680
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-235680: (2.041356672s)
--- PASS: TestKicStaticIP (35.27s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (66.17s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-289765 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-289765 --driver=docker  --container-runtime=crio: (30.617347624s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-292456 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-292456 --driver=docker  --container-runtime=crio: (29.92043415s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-289765
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-292456
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-292456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-292456
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-292456: (2.020224779s)
helpers_test.go:175: Cleaning up "first-289765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-289765
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-289765: (2.220681454s)
--- PASS: TestMinikubeProfile (66.17s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.65s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-714384 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-714384 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.645566376s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.65s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-714384 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.65s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-716123 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-716123 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.654034708s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.65s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-716123 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-714384 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-714384 --alsologtostderr -v=5: (1.625358444s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-716123 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-716123
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-716123: (1.207971767s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.89s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-716123
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-716123: (6.893509893s)
--- PASS: TestMountStart/serial/RestartStopped (7.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-716123 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (106.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-273255 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1007 12:39:30.726939 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-273255 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m45.620044554s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.14s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-273255 -- rollout status deployment/busybox: (4.231088407s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- exec busybox-7dff88458-658kr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- exec busybox-7dff88458-6h5cc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- exec busybox-7dff88458-658kr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- exec busybox-7dff88458-6h5cc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- exec busybox-7dff88458-658kr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- exec busybox-7dff88458-6h5cc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.18s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- exec busybox-7dff88458-658kr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- exec busybox-7dff88458-658kr -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- exec busybox-7dff88458-6h5cc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-273255 -- exec busybox-7dff88458-6h5cc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)
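
The exec commands above resolve host.minikube.internal from inside each busybox pod (the awk 'NR==5' | cut -d' ' -f3 pipeline just extracts the address from nslookup's output) and ping the result, which on this cluster is the Docker network gateway 192.168.67.1. A hypothetical Go version of the same check, for illustration only:

package main

import (
	"fmt"
	"net"
	"os/exec"
)

func main() {
	// Inside the cluster this resolves to the host-side gateway (192.168.67.1 in the log).
	addrs, err := net.LookupHost("host.minikube.internal")
	if err != nil {
		panic(err)
	}
	if len(addrs) == 0 {
		panic("no addresses returned")
	}
	fmt.Println("host.minikube.internal ->", addrs[0])
	out, err := exec.Command("ping", "-c", "1", addrs[0]).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("ping failed:", err)
	}
}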

                                                
                                    
TestMultiNode/serial/AddNode (27.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-273255 -v 3 --alsologtostderr
E1007 12:40:53.790038 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-273255 -v 3 --alsologtostderr: (27.098418265s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.75s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-273255 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 cp testdata/cp-test.txt multinode-273255:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 cp multinode-273255:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1439120700/001/cp-test_multinode-273255.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 cp multinode-273255:/home/docker/cp-test.txt multinode-273255-m02:/home/docker/cp-test_multinode-273255_multinode-273255-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255-m02 "sudo cat /home/docker/cp-test_multinode-273255_multinode-273255-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 cp multinode-273255:/home/docker/cp-test.txt multinode-273255-m03:/home/docker/cp-test_multinode-273255_multinode-273255-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255-m03 "sudo cat /home/docker/cp-test_multinode-273255_multinode-273255-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 cp testdata/cp-test.txt multinode-273255-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 cp multinode-273255-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1439120700/001/cp-test_multinode-273255-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 cp multinode-273255-m02:/home/docker/cp-test.txt multinode-273255:/home/docker/cp-test_multinode-273255-m02_multinode-273255.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255 "sudo cat /home/docker/cp-test_multinode-273255-m02_multinode-273255.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 cp multinode-273255-m02:/home/docker/cp-test.txt multinode-273255-m03:/home/docker/cp-test_multinode-273255-m02_multinode-273255-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255-m03 "sudo cat /home/docker/cp-test_multinode-273255-m02_multinode-273255-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 cp testdata/cp-test.txt multinode-273255-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 cp multinode-273255-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1439120700/001/cp-test_multinode-273255-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 cp multinode-273255-m03:/home/docker/cp-test.txt multinode-273255:/home/docker/cp-test_multinode-273255-m03_multinode-273255.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255 "sudo cat /home/docker/cp-test_multinode-273255-m03_multinode-273255.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 cp multinode-273255-m03:/home/docker/cp-test.txt multinode-273255-m02:/home/docker/cp-test_multinode-273255-m03_multinode-273255-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 ssh -n multinode-273255-m02 "sudo cat /home/docker/cp-test_multinode-273255-m03_multinode-273255-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.08s)
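
Every cp step above is verified the same way: copy with minikube cp, then read the file back over SSH on the target node and compare it with the source. A compact sketch of one such round trip (a hypothetical helper; the profile, node, and paths are taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used by this test run with the given arguments.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile, node := "multinode-273255", "multinode-273255-m02"
	// Copy a local file onto the node, then read it back over SSH.
	if _, err := run("-p", profile, "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	got, err := run("-p", profile, "ssh", "-n", node, "--", "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Print(got)
}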

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-273255 node stop m03: (1.211745987s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-273255 status: exit status 7 (520.18116ms)

                                                
                                                
-- stdout --
	multinode-273255
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-273255-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-273255-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-273255 status --alsologtostderr: exit status 7 (504.717242ms)

                                                
                                                
-- stdout --
	multinode-273255
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-273255-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-273255-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:41:17.244645 1298825 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:41:17.244861 1298825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:41:17.244885 1298825 out.go:358] Setting ErrFile to fd 2...
	I1007 12:41:17.244906 1298825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:41:17.245165 1298825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
	I1007 12:41:17.245379 1298825 out.go:352] Setting JSON to false
	I1007 12:41:17.245437 1298825 mustload.go:65] Loading cluster: multinode-273255
	I1007 12:41:17.245512 1298825 notify.go:220] Checking for updates...
	I1007 12:41:17.246680 1298825 config.go:182] Loaded profile config "multinode-273255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:41:17.246736 1298825 status.go:174] checking status of multinode-273255 ...
	I1007 12:41:17.247419 1298825 cli_runner.go:164] Run: docker container inspect multinode-273255 --format={{.State.Status}}
	I1007 12:41:17.265119 1298825 status.go:371] multinode-273255 host status = "Running" (err=<nil>)
	I1007 12:41:17.265141 1298825 host.go:66] Checking if "multinode-273255" exists ...
	I1007 12:41:17.265436 1298825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-273255
	I1007 12:41:17.286158 1298825 host.go:66] Checking if "multinode-273255" exists ...
	I1007 12:41:17.286446 1298825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:41:17.286486 1298825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-273255
	I1007 12:41:17.310140 1298825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34382 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/multinode-273255/id_rsa Username:docker}
	I1007 12:41:17.401910 1298825 ssh_runner.go:195] Run: systemctl --version
	I1007 12:41:17.406401 1298825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:41:17.418662 1298825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:41:17.471419 1298825 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-07 12:41:17.460855537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:41:17.472032 1298825 kubeconfig.go:125] found "multinode-273255" server: "https://192.168.67.2:8443"
	I1007 12:41:17.472068 1298825 api_server.go:166] Checking apiserver status ...
	I1007 12:41:17.472114 1298825 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:41:17.483630 1298825 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1426/cgroup
	I1007 12:41:17.493142 1298825 api_server.go:182] apiserver freezer: "4:freezer:/docker/b1a774dc224f4a8ff53674fb9bc7ec224965f5c5b6f0659af6847a87f0784b8f/crio/crio-7d3cbfaf46126882a547da48fda380d101eb51b7c31069b3a1086a36d6a921d9"
	I1007 12:41:17.493219 1298825 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b1a774dc224f4a8ff53674fb9bc7ec224965f5c5b6f0659af6847a87f0784b8f/crio/crio-7d3cbfaf46126882a547da48fda380d101eb51b7c31069b3a1086a36d6a921d9/freezer.state
	I1007 12:41:17.502226 1298825 api_server.go:204] freezer state: "THAWED"
	I1007 12:41:17.502256 1298825 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1007 12:41:17.510393 1298825 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1007 12:41:17.510475 1298825 status.go:463] multinode-273255 apiserver status = Running (err=<nil>)
	I1007 12:41:17.510500 1298825 status.go:176] multinode-273255 status: &{Name:multinode-273255 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:41:17.510542 1298825 status.go:174] checking status of multinode-273255-m02 ...
	I1007 12:41:17.510899 1298825 cli_runner.go:164] Run: docker container inspect multinode-273255-m02 --format={{.State.Status}}
	I1007 12:41:17.528031 1298825 status.go:371] multinode-273255-m02 host status = "Running" (err=<nil>)
	I1007 12:41:17.528055 1298825 host.go:66] Checking if "multinode-273255-m02" exists ...
	I1007 12:41:17.528389 1298825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-273255-m02
	I1007 12:41:17.545481 1298825 host.go:66] Checking if "multinode-273255-m02" exists ...
	I1007 12:41:17.545787 1298825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:41:17.545841 1298825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-273255-m02
	I1007 12:41:17.563559 1298825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34387 SSHKeyPath:/home/jenkins/minikube-integration/19763-1173066/.minikube/machines/multinode-273255-m02/id_rsa Username:docker}
	I1007 12:41:17.657463 1298825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:41:17.671846 1298825 status.go:176] multinode-273255-m02 status: &{Name:multinode-273255-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:41:17.671884 1298825 status.go:174] checking status of multinode-273255-m03 ...
	I1007 12:41:17.672275 1298825 cli_runner.go:164] Run: docker container inspect multinode-273255-m03 --format={{.State.Status}}
	I1007 12:41:17.689240 1298825 status.go:371] multinode-273255-m03 host status = "Stopped" (err=<nil>)
	I1007 12:41:17.689264 1298825 status.go:384] host is not running, skipping remaining checks
	I1007 12:41:17.689272 1298825 status.go:176] multinode-273255-m03 status: &{Name:multinode-273255-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
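The --alsologtostderr trace above walks through how "minikube status" verifies a control-plane node: inspect the Docker container state, locate the kube-apiserver process, read its freezer cgroup state, and finally probe the /healthz endpoint at https://192.168.67.2:8443. Below is a minimal Go sketch of that last step only; the endpoint comes from the log, while the timeout and the InsecureSkipVerify transport are assumptions made for illustration rather than minikube's own client configuration.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy performs the same kind of check the trace records at
// api_server.go:253: GET <endpoint>/healthz and treat a 200 response with
// body "ok" as healthy.
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test cluster serves a self-signed certificate, so this sketch
		// skips verification; an assumption, not what minikube itself does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("https://192.168.67.2:8443")
	fmt.Println(healthy, err)
}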

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-273255 node start m03 -v=7 --alsologtostderr: (8.990544859s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.75s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (111.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-273255
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-273255
E1007 12:41:31.241942 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-273255: (24.837244727s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-273255 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-273255 --wait=true -v=8 --alsologtostderr: (1m26.615978113s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-273255
--- PASS: TestMultiNode/serial/RestartKeepsNodes (111.59s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-273255 node delete m03: (4.853318583s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.55s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-273255 stop: (23.698786613s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-273255 status: exit status 7 (91.748375ms)

                                                
                                                
-- stdout --
	multinode-273255
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-273255-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-273255 status --alsologtostderr: exit status 7 (106.205836ms)

                                                
                                                
-- stdout --
	multinode-273255
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-273255-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:43:48.418788 1306601 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:43:48.419043 1306601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:43:48.419081 1306601 out.go:358] Setting ErrFile to fd 2...
	I1007 12:43:48.419102 1306601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:43:48.419447 1306601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
	I1007 12:43:48.419707 1306601 out.go:352] Setting JSON to false
	I1007 12:43:48.419775 1306601 mustload.go:65] Loading cluster: multinode-273255
	I1007 12:43:48.419863 1306601 notify.go:220] Checking for updates...
	I1007 12:43:48.420303 1306601 config.go:182] Loaded profile config "multinode-273255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:43:48.420349 1306601 status.go:174] checking status of multinode-273255 ...
	I1007 12:43:48.421411 1306601 cli_runner.go:164] Run: docker container inspect multinode-273255 --format={{.State.Status}}
	I1007 12:43:48.440180 1306601 status.go:371] multinode-273255 host status = "Stopped" (err=<nil>)
	I1007 12:43:48.440200 1306601 status.go:384] host is not running, skipping remaining checks
	I1007 12:43:48.440207 1306601 status.go:176] multinode-273255 status: &{Name:multinode-273255 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:43:48.440237 1306601 status.go:174] checking status of multinode-273255-m02 ...
	I1007 12:43:48.440581 1306601 cli_runner.go:164] Run: docker container inspect multinode-273255-m02 --format={{.State.Status}}
	I1007 12:43:48.469255 1306601 status.go:371] multinode-273255-m02 host status = "Stopped" (err=<nil>)
	I1007 12:43:48.469276 1306601 status.go:384] host is not running, skipping remaining checks
	I1007 12:43:48.469283 1306601 status.go:176] multinode-273255-m02 status: &{Name:multinode-273255-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.90s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (60.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-273255 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1007 12:44:30.726527 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-273255 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (59.797580943s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-273255 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (60.47s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-273255
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-273255-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-273255-m02 --driver=docker  --container-runtime=crio: exit status 14 (97.489636ms)

                                                
                                                
-- stdout --
	* [multinode-273255-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-273255-m02' is duplicated with machine name 'multinode-273255-m02' in profile 'multinode-273255'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-273255-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-273255-m03 --driver=docker  --container-runtime=crio: (32.242009193s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-273255
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-273255: exit status 80 (376.63739ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-273255 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-273255-m03 already exists in multinode-273255-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-273255-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-273255-m03: (2.013879326s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.79s)
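Both non-zero exits above (MK_USAGE for the duplicated profile name, GUEST_NODE_ADD for the node that already exists) reduce to a uniqueness check across existing profiles and the machines they own. The following is a hypothetical sketch of such a check; the function name and the in-memory map of profiles are invented for illustration and are not minikube's configuration code.

package main

import "fmt"

// validateProfileName refuses a new profile whose name collides with a
// machine name already owned by another multi-node profile, mirroring the
// "Profile name should be unique" error above. The map layout is an
// assumption made for this sketch.
func validateProfileName(newProfile string, profiles map[string][]string) error {
	for profile, machines := range profiles {
		for _, machine := range machines {
			if machine == newProfile {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					newProfile, machine, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-273255": {"multinode-273255", "multinode-273255-m02"},
	}
	fmt.Println(validateProfileName("multinode-273255-m02", existing)) // collision, as in the test
	fmt.Println(validateProfileName("multinode-273255-m04", existing)) // nil
}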

                                                
                                    
TestPreload (126.37s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-405997 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1007 12:46:31.242020 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-405997 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m34.144071055s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-405997 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-405997 image pull gcr.io/k8s-minikube/busybox: (3.372614778s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-405997
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-405997: (5.755235874s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-405997 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-405997 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.664201916s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-405997 image list
helpers_test.go:175: Cleaning up "test-preload-405997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-405997
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-405997: (2.145277439s)
--- PASS: TestPreload (126.37s)

                                                
                                    
TestScheduledStopUnix (104.36s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-838270 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-838270 --memory=2048 --driver=docker  --container-runtime=crio: (27.987724549s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-838270 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-838270 -n scheduled-stop-838270
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-838270 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1007 12:48:02.654061 1178462 retry.go:31] will retry after 139.904µs: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.654574 1178462 retry.go:31] will retry after 82.403µs: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.654854 1178462 retry.go:31] will retry after 200.288µs: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.655697 1178462 retry.go:31] will retry after 432.437µs: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.656782 1178462 retry.go:31] will retry after 354.248µs: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.657914 1178462 retry.go:31] will retry after 1.131915ms: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.660121 1178462 retry.go:31] will retry after 801.807µs: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.661214 1178462 retry.go:31] will retry after 1.94228ms: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.663403 1178462 retry.go:31] will retry after 1.495151ms: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.665621 1178462 retry.go:31] will retry after 4.683966ms: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.670929 1178462 retry.go:31] will retry after 5.84781ms: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.677677 1178462 retry.go:31] will retry after 8.020817ms: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.685863 1178462 retry.go:31] will retry after 17.103436ms: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.704125 1178462 retry.go:31] will retry after 23.913115ms: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
I1007 12:48:02.728755 1178462 retry.go:31] will retry after 37.877053ms: open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/scheduled-stop-838270/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-838270 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-838270 -n scheduled-stop-838270
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-838270
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-838270 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-838270
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-838270: exit status 7 (78.065464ms)

                                                
                                                
-- stdout --
	scheduled-stop-838270
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-838270 -n scheduled-stop-838270
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-838270 -n scheduled-stop-838270: exit status 7 (69.288608ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-838270" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-838270
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-838270: (4.744097821s)
--- PASS: TestScheduledStopUnix (104.36s)
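The retry.go lines above show the test polling for the scheduled-stop pid file at steadily growing intervals, from a few hundred microseconds up to tens of milliseconds. A minimal sketch of that wait-with-backoff pattern follows, assuming a simple doubling schedule and an invented path; it is not minikube's retry helper itself.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or maxWait elapses, roughly doubling
// the sleep between attempts as the intervals in the log suggest. The
// starting interval and the cap are assumptions made for this sketch.
func waitForFile(path string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	backoff := 100 * time.Microsecond
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", maxWait, path)
		}
		time.Sleep(backoff)
		if backoff < 50*time.Millisecond {
			backoff *= 2
		}
	}
}

func main() {
	// Hypothetical path used only for this example.
	fmt.Println(waitForFile("/tmp/scheduled-stop-example/pid", 2*time.Second))
}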

                                                
                                    
TestInsufficientStorage (10.78s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-961611 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-961611 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.305849581s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c712235f-35ee-4be3-ad6c-b52bc6ec5c14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-961611] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"87d8a1b1-fa91-4ecb-8604-3263948924fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19763"}}
	{"specversion":"1.0","id":"d12f9f3b-f4f0-4bf2-b909-4ee9d4008d98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6095d866-6734-4059-ba07-78e4e1226350","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig"}}
	{"specversion":"1.0","id":"201508b0-344f-467f-b443-eab6b27a43c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube"}}
	{"specversion":"1.0","id":"281a9f2a-4cf1-401e-a192-4beb15e77b24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6b36002f-a6e0-4a99-be01-a4479164a0d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7d6aa517-3dd8-4d5e-b573-994e2a84f92c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a13af8c7-8173-41b7-b353-6b511df35aad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7d2b8bac-b00c-40f4-9c90-02454f308d01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"93dbb490-d31a-4cdc-a6a1-b90dc1629445","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c7f58bfd-3cb5-4523-8714-2dec762ecf88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-961611\" primary control-plane node in \"insufficient-storage-961611\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"13ddd780-98ac-4ef0-be52-42c311fbbaea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727731891-master ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a44bf543-d3dc-4b68-b931-b44e794eca90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f91b4b60-7331-478c-b7ca-cbb453277f24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-961611 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-961611 --output=json --layout=cluster: exit status 7 (311.062498ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-961611","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-961611","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 12:49:27.085763 1323942 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-961611" does not appear in /home/jenkins/minikube-integration/19763-1173066/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-961611 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-961611 --output=json --layout=cluster: exit status 7 (290.13592ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-961611","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-961611","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 12:49:27.385976 1324004 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-961611" does not appear in /home/jenkins/minikube-integration/19763-1173066/kubeconfig
	E1007 12:49:27.397215 1324004 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/insufficient-storage-961611/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-961611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-961611
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-961611: (1.867955424s)
--- PASS: TestInsufficientStorage (10.78s)
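With --output=json, minikube emits one CloudEvents-style JSON object per line, and the last event above (type io.k8s.sigs.minikube.error, name RSRC_DOCKER_STORAGE, exitcode 26) carries the out-of-disk failure. The sketch below decodes such a stream; the struct fields are copied from the keys visible in the log, not from minikube's exported types.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors the fields visible in the log output; Data holds the
// step/info/error payload, e.g. message, name and exitcode.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue // skip non-JSON lines such as the "-- stdout --" markers
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}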

                                                
                                    
TestRunningBinaryUpgrade (73.09s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2208316753 start -p running-upgrade-760682 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2208316753 start -p running-upgrade-760682 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.493519687s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-760682 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1007 12:54:30.726713 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-760682 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.775121273s)
helpers_test.go:175: Cleaning up "running-upgrade-760682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-760682
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-760682: (2.979970132s)
--- PASS: TestRunningBinaryUpgrade (73.09s)

                                                
                                    
TestKubernetesUpgrade (392.95s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-068891 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1007 12:51:31.241869 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-068891 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m14.884087871s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-068891
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-068891: (1.302016975s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-068891 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-068891 status --format={{.Host}}: exit status 7 (139.815559ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-068891 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-068891 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.237510589s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-068891 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-068891 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-068891 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (127.805211ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-068891] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-068891
	    minikube start -p kubernetes-upgrade-068891 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0688912 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-068891 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-068891 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-068891 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.941821783s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-068891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-068891
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-068891: (2.186574954s)
--- PASS: TestKubernetesUpgrade (392.95s)
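The K8S_DOWNGRADE_UNSUPPORTED exit above comes from a version comparison: the requested v1.20.0 is older than the v1.31.1 the existing cluster already runs. A hedged sketch of that guard is below, using golang.org/x/mod/semver for the comparison; minikube's actual implementation may rely on a different version library.

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkNoDowngrade refuses a requested version older than the cluster's
// current version, mirroring the error text in the log. Versions must carry
// the leading "v" expected by the semver package.
func checkNoDowngrade(current, requested string) error {
	if !semver.IsValid(current) || !semver.IsValid(requested) {
		return fmt.Errorf("invalid version: %q or %q", current, requested)
	}
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkNoDowngrade("v1.31.1", "v1.20.0")) // refused, as in the test
	fmt.Println(checkNoDowngrade("v1.31.1", "v1.31.1")) // nil
}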

                                                
                                    
TestMissingContainerUpgrade (157.9s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.901855237 start -p missing-upgrade-422291 --memory=2200 --driver=docker  --container-runtime=crio
E1007 12:49:30.726407 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:49:34.308335 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.901855237 start -p missing-upgrade-422291 --memory=2200 --driver=docker  --container-runtime=crio: (1m20.719846098s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-422291
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-422291: (10.444820701s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-422291
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-422291 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-422291 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.875347318s)
helpers_test.go:175: Cleaning up "missing-upgrade-422291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-422291
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-422291: (1.993521136s)
--- PASS: TestMissingContainerUpgrade (157.90s)

                                                
                                    
TestPause/serial/Start (83.03s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-208488 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-208488 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m23.034419645s)
--- PASS: TestPause/serial/Start (83.03s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (22.38s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-208488 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-208488 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.358350776s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (22.38s)

                                                
                                    
TestPause/serial/Pause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-208488 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-208488 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-208488 --output=json --layout=cluster: exit status 2 (304.143565ms)

                                                
                                                
-- stdout --
	{"Name":"pause-208488","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-208488","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
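The --layout=cluster output above encodes component state as HTTP-like status codes (200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage). The decoding sketch below mirrors the JSON keys visible in the log; the Go types themselves are an assumption, not minikube's published schema.

package main

import (
	"encoding/json"
	"fmt"
)

// component and clusterStatus mirror the keys printed by
// "minikube status --output=json --layout=cluster" in the log above.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
	Nodes      []clusterStatus      `json:"Nodes"`
}

func main() {
	// Trimmed example taken from the VerifyStatus output above.
	raw := `{"Name":"pause-208488","StatusCode":418,"StatusName":"Paused",
	  "Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},
	  "Nodes":[{"Name":"pause-208488","StatusCode":200,"StatusName":"OK",
	    "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Printf("%s is %s; apiserver: %s\n",
		st.Name, st.StatusName, st.Nodes[0].Components["apiserver"].StatusName)
}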

                                                
                                    
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-208488 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
TestPause/serial/PauseAgain (0.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-208488 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

                                                
                                    
TestPause/serial/DeletePaused (2.65s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-208488 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-208488 --alsologtostderr -v=5: (2.652248961s)
--- PASS: TestPause/serial/DeletePaused (2.65s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.14s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-208488
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-208488: exit status 1 (18.28918ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-208488: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.14s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.00s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (78.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1725696146 start -p stopped-upgrade-195905 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1725696146 start -p stopped-upgrade-195905 --memory=2200 --vm-driver=docker  --container-runtime=crio: (43.26858027s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1725696146 -p stopped-upgrade-195905 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1725696146 -p stopped-upgrade-195905 stop: (4.755209282s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-195905 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-195905 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.948970917s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (78.97s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-195905
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-195905: (1.024614169s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-196649 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-196649 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (121.976959ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-196649] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (32.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-196649 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-196649 --driver=docker  --container-runtime=crio: (31.652676079s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-196649 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.04s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-196649 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-196649 --no-kubernetes --driver=docker  --container-runtime=crio: (4.611085539s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-196649 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-196649 status -o json: exit status 2 (293.890503ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-196649","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-196649
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-196649: (1.97653285s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.88s)

                                                
                                    
TestNoKubernetes/serial/Start (6.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-196649 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-196649 --no-kubernetes --driver=docker  --container-runtime=crio: (6.504158629s)
--- PASS: TestNoKubernetes/serial/Start (6.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-196649 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-196649 "sudo systemctl is-active --quiet service kubelet": exit status 1 (255.242153ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (16.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (16.212072361s)
--- PASS: TestNoKubernetes/serial/ProfileList (16.70s)
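The profile listing exercised above has a plain and a JSON form; the JSON form is the one scripts usually parse. The jq filter below is illustrative only and assumes the current output shape (a valid/invalid split with Name fields), which may differ across minikube versions:

    minikube profile list
    minikube profile list --output=json | jq -r '.valid[].Name'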

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-196649
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-196649: (1.195551634s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-196649 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-196649 --driver=docker  --container-runtime=crio: (6.858840237s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-196649 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-196649 "sudo systemctl is-active --quiet service kubelet": exit status 1 (252.080101ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-617489 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-617489 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (154.399594ms)

                                                
                                                
-- stdout --
	* [false-617489] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:56:36.307390 1360995 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:56:36.307520 1360995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:56:36.307530 1360995 out.go:358] Setting ErrFile to fd 2...
	I1007 12:56:36.307543 1360995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:56:36.307878 1360995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1173066/.minikube/bin
	I1007 12:56:36.308364 1360995 out.go:352] Setting JSON to false
	I1007 12:56:36.309307 1360995 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":31141,"bootTime":1728274656,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 12:56:36.309396 1360995 start.go:139] virtualization:  
	I1007 12:56:36.311773 1360995 out.go:177] * [false-617489] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 12:56:36.313784 1360995 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:56:36.313851 1360995 notify.go:220] Checking for updates...
	I1007 12:56:36.316934 1360995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:56:36.318631 1360995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-1173066/kubeconfig
	I1007 12:56:36.320192 1360995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1173066/.minikube
	I1007 12:56:36.321951 1360995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 12:56:36.323558 1360995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:56:36.325931 1360995 config.go:182] Loaded profile config "kubernetes-upgrade-068891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:56:36.326040 1360995 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:56:36.346951 1360995 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 12:56:36.347069 1360995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:56:36.397383 1360995 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-07 12:56:36.388054422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:56:36.397496 1360995 docker.go:318] overlay module found
	I1007 12:56:36.399343 1360995 out.go:177] * Using the docker driver based on user configuration
	I1007 12:56:36.400809 1360995 start.go:297] selected driver: docker
	I1007 12:56:36.400824 1360995 start.go:901] validating driver "docker" against <nil>
	I1007 12:56:36.400839 1360995 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:56:36.403295 1360995 out.go:201] 
	W1007 12:56:36.404957 1360995 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1007 12:56:36.406431 1360995 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-617489 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-617489

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-617489

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-617489

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-617489

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-617489

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-617489

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-617489

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-617489

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-617489

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-617489

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-617489

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-617489" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-617489" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Oct 2024 12:52:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-068891
contexts:
- context:
    cluster: kubernetes-upgrade-068891
    user: kubernetes-upgrade-068891
  name: kubernetes-upgrade-068891
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-068891
  user:
    client-certificate: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kubernetes-upgrade-068891/client.crt
    client-key: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kubernetes-upgrade-068891/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-617489

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-617489"

                                                
                                                
----------------------- debugLogs end: false-617489 [took: 3.615754685s] --------------------------------
helpers_test.go:175: Cleaning up "false-617489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-617489
--- PASS: TestNetworkPlugins/group/false (3.98s)
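The failure exercised above is a usage check rather than a runtime fault: with --container-runtime=crio, minikube rejects --cni=false up front (exit status 14, MK_USAGE) because CRI-O needs a CNI plugin for pod networking. Working invocations, shown here with a placeholder profile name, either let minikube pick a CNI or name one explicitly:

    minikube start -p crio-demo --driver=docker --container-runtime=crio                 # CNI selected automatically
    minikube start -p crio-demo --driver=docker --container-runtime=crio --cni=kindnet   # explicit choice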

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (183.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-299687 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1007 12:59:30.726963 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-299687 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m3.197670051s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (183.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (60.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-047383 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 13:01:31.242396 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-047383 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m0.837628529s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (60.84s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-047383 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [33361dcd-06a1-4ebc-ad3e-e70d6c0b40c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [33361dcd-06a1-4ebc-ad3e-e70d6c0b40c9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003820703s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-047383 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.39s)
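The DeployApp step is the usual create/wait/exec pattern; an approximate hand-run version is below, with kubectl wait standing in for the test's own polling helper (testdata/busybox.yaml is the manifest from the minikube integration test tree):

    kubectl --context no-preload-047383 create -f testdata/busybox.yaml
    kubectl --context no-preload-047383 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-047383 exec busybox -- /bin/sh -c "ulimit -n"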

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-047383 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-047383 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)
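The addon step above substitutes a stand-in image (echoserver on a fake registry) for metrics-server, so the check only needs to confirm that the override reached the deployment spec; the describe call is how that is read back. Roughly, with the grep added here only for readability:

    minikube addons enable metrics-server -p no-preload-047383 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context no-preload-047383 describe deploy/metrics-server -n kube-system | grep -i image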

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-047383 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-047383 --alsologtostderr -v=3: (12.19662135s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-299687 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fcec0f16-13d4-43f4-a940-8393d87284ae] Pending
helpers_test.go:344: "busybox" [fcec0f16-13d4-43f4-a940-8393d87284ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fcec0f16-13d4-43f4-a940-8393d87284ae] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004193352s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-299687 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-047383 -n no-preload-047383
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-047383 -n no-preload-047383: exit status 7 (78.052681ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-047383 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
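Exit status 7 from minikube status corresponds here to the stopped profile, which the test explicitly accepts ("may be ok"); the point of the step is that addons can still be toggled while the cluster is down. A hand-run equivalent:

    minikube status --format={{.Host}} -p no-preload-047383 -n no-preload-047383    # prints Stopped, exits non-zero
    minikube addons enable dashboard -p no-preload-047383 --images=MetricsScraper=registry.k8s.io/echoserver:1.4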

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (267.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-047383 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-047383 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m27.067860673s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-047383 -n no-preload-047383
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-299687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-299687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.283759137s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-299687 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-299687 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-299687 --alsologtostderr -v=3: (12.255900477s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-299687 -n old-k8s-version-299687
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-299687 -n old-k8s-version-299687: exit status 7 (98.79673ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-299687 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (138.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-299687 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1007 13:04:30.726634 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-299687 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m18.342941554s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-299687 -n old-k8s-version-299687
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (138.71s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-frqhs" [4792ed1e-1f4f-40d8-ac4c-62052d1ac169] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00437962s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
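The post-restart check above only asserts that the dashboard pod created by the earlier addon step is Running again. Outside the harness the same state can be read with the label selector the test uses:

    kubectl --context old-k8s-version-299687 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context old-k8s-version-299687 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m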

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-frqhs" [4792ed1e-1f4f-40d8-ac4c-62052d1ac169] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010672221s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-299687 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-299687 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
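The image audit lists everything loaded into the node's container runtime and flags anything outside the expected Kubernetes/minikube set (here the kindnetd CNI images and the busybox test image). The same listing can be taken manually; the table format is assumed to be available in this minikube version alongside json:

    minikube -p old-k8s-version-299687 image list --format=json
    minikube -p old-k8s-version-299687 image list --format=table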

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-299687 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-299687 -n old-k8s-version-299687
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-299687 -n old-k8s-version-299687: exit status 2 (336.931624ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-299687 -n old-k8s-version-299687
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-299687 -n old-k8s-version-299687: exit status 2 (319.901973ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-299687 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-299687 -n old-k8s-version-299687
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-299687 -n old-k8s-version-299687
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.07s)
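The pause/unpause assertions read the per-component status fields rather than relying on the exit code alone: while paused the API server reports Paused and the kubelet Stopped, and minikube status exits non-zero (status 2 above) because components are down. A hand-run version against the same profile:

    minikube pause -p old-k8s-version-299687 --alsologtostderr -v=1
    minikube status --format={{.APIServer}} -p old-k8s-version-299687 -n old-k8s-version-299687    # Paused
    minikube status --format={{.Kubelet}} -p old-k8s-version-299687 -n old-k8s-version-299687      # Stopped
    minikube unpause -p old-k8s-version-299687 --alsologtostderr -v=1
    minikube status --format={{.APIServer}} -p old-k8s-version-299687 -n old-k8s-version-299687    # Running again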

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (82.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-491184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 13:06:14.310499 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-491184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m22.892211424s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-491184 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [76d4bed5-17d6-4740-a3c5-b51c5031c377] Pending
helpers_test.go:344: "busybox" [76d4bed5-17d6-4740-a3c5-b51c5031c377] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1007 13:06:31.241579 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [76d4bed5-17d6-4740-a3c5-b51c5031c377] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003590806s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-491184 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rjs84" [84794ccd-321e-4e2c-9c9e-c5ae973f2772] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003454843s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-491184 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-491184 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040786439s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-491184 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-491184 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-491184 --alsologtostderr -v=3: (12.039842s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.04s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rjs84" [84794ccd-321e-4e2c-9c9e-c5ae973f2772] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003583671s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-047383 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-047383 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-047383 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-047383 -n no-preload-047383
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-047383 -n no-preload-047383: exit status 2 (311.664665ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-047383 -n no-preload-047383
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-047383 -n no-preload-047383: exit status 2 (311.644825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-047383 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-047383 -n no-preload-047383
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-047383 -n no-preload-047383
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-491184 -n embed-certs-491184
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-491184 -n embed-certs-491184: exit status 7 (202.277384ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-491184 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/embed-certs/serial/SecondStart (296.37s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-491184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-491184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m56.024890377s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-491184 -n embed-certs-491184
E1007 13:11:49.032780 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (296.37s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.85s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-654708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 13:07:06.665768 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:06.673002 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:06.684984 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:06.711459 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:06.753459 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:06.834853 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:06.996702 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:07.318861 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:07.960566 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:09.242415 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:11.804498 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:16.926334 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:27.168463 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:07:47.650352 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-654708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m26.846013356s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.85s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-654708 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3b402182-d4a1-45e5-8e18-8172c2c35f57] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1007 13:08:28.611868 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [3b402182-d4a1-45e5-8e18-8172c2c35f57] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003889173s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-654708 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-654708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-654708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.007706963s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-654708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-654708 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-654708 --alsologtostderr -v=3: (11.96274373s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-654708 -n default-k8s-diff-port-654708
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-654708 -n default-k8s-diff-port-654708: exit status 7 (71.731142ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-654708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-654708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 13:09:30.726855 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:09:50.533312 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:11:31.241495 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/addons-504513/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:11:47.745398 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:11:47.751863 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:11:47.763238 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:11:47.784679 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:11:47.826119 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:11:47.907508 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:11:48.069025 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:11:48.390747 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-654708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m26.649524399s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-654708 -n default-k8s-diff-port-654708
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.19s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-j5txn" [d095ad6d-2e18-4b09-bd6a-701bd27a9b38] Running
E1007 13:11:50.315072 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:11:52.876504 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003405568s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.36s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-j5txn" [d095ad6d-2e18-4b09-bd6a-701bd27a9b38] Running
E1007 13:11:57.998591 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017598488s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-491184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.36s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-491184 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (3.11s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-491184 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-491184 -n embed-certs-491184
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-491184 -n embed-certs-491184: exit status 2 (358.337764ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-491184 -n embed-certs-491184
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-491184 -n embed-certs-491184: exit status 2 (331.931635ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-491184 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-491184 -n embed-certs-491184
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-491184 -n embed-certs-491184
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.11s)

TestStartStop/group/newest-cni/serial/FirstStart (36.52s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-174967 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 13:12:08.239860 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:12:28.721542 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:12:34.374650 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-174967 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (36.523113883s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.52s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-174967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-174967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.122593152s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/newest-cni/serial/Stop (1.28s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-174967 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-174967 --alsologtostderr -v=3: (1.283444652s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-174967 -n newest-cni-174967
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-174967 -n newest-cni-174967: exit status 7 (68.564796ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-174967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (16.09s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-174967 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-174967 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (15.580402955s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-174967 -n newest-cni-174967
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-174967 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/newest-cni/serial/Pause (3.25s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-174967 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-174967 -n newest-cni-174967
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-174967 -n newest-cni-174967: exit status 2 (322.699274ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-174967 -n newest-cni-174967
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-174967 -n newest-cni-174967: exit status 2 (313.411663ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-174967 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-174967 -n newest-cni-174967
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-174967 -n newest-cni-174967
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.25s)

TestNetworkPlugins/group/auto/Start (53.01s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1007 13:13:09.683327 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (53.012215932s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dj5tx" [44ab55b5-81d9-4c84-be9b-e06b9ab6c9e1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003768262s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dj5tx" [44ab55b5-81d9-4c84-be9b-e06b9ab6c9e1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004665484s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-654708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-654708 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.84s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-654708 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-654708 --alsologtostderr -v=1: (1.014237732s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-654708 -n default-k8s-diff-port-654708
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-654708 -n default-k8s-diff-port-654708: exit status 2 (444.738028ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-654708 -n default-k8s-diff-port-654708
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-654708 -n default-k8s-diff-port-654708: exit status 2 (431.94003ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-654708 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-654708 -n default-k8s-diff-port-654708
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-654708 -n default-k8s-diff-port-654708
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.84s)
E1007 13:18:44.289454 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/default-k8s-diff-port-654708/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:01.957879 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:01.964216 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:01.975549 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:01.996885 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:02.038693 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:02.120402 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:02.282407 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:02.604263 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:03.246448 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:04.528380 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:04.770850 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/default-k8s-diff-port-654708/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:07.090594 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (52.72s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (52.723115617s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.72s)

TestNetworkPlugins/group/auto/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-617489 "pgrep -a kubelet"
I1007 13:14:01.619983 1178462 config.go:182] Loaded profile config "auto-617489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

TestNetworkPlugins/group/auto/NetCatPod (12.37s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-617489 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qjrhc" [ad54cdff-bf10-4b13-b2f4-96ba31e3e49a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qjrhc" [ad54cdff-bf10-4b13-b2f4-96ba31e3e49a] Running
E1007 13:14:13.793761 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004265184s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.37s)

TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-617489 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-spjwd" [ec37fc76-aa0d-492b-beb1-112fc6be2af9] Running
E1007 13:14:30.727314 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:14:31.604602 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00456412s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-617489 "pgrep -a kubelet"
I1007 13:14:32.227682 1178462 config.go:182] Loaded profile config "kindnet-617489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-617489 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zb5dd" [43390714-aab6-4aeb-91ba-b4afb4084a73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zb5dd" [43390714-aab6-4aeb-91ba-b4afb4084a73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003846563s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

TestNetworkPlugins/group/calico/Start (69.83s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m9.825983757s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.83s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-617489 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.35s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.35s)

TestNetworkPlugins/group/custom-flannel/Start (63.87s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.867704622s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.87s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-szgdd" [dc7b531b-7556-4ea4-a70f-0acf663d89dc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005497985s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-617489 "pgrep -a kubelet"
I1007 13:15:52.121773 1178462 config.go:182] Loaded profile config "calico-617489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (12.35s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-617489 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rmqr8" [134e7115-a948-4722-a46c-7425537ef3e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rmqr8" [134e7115-a948-4722-a46c-7425537ef3e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003597829s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.35s)

TestNetworkPlugins/group/calico/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-617489 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-617489 "pgrep -a kubelet"
I1007 13:16:14.599826 1178462 config.go:182] Loaded profile config "custom-flannel-617489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.35s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-617489 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5t4tf" [fd20796b-5fdb-4248-943e-c5a6ef7fcf47] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5t4tf" [fd20796b-5fdb-4248-943e-c5a6ef7fcf47] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004315664s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.35s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-617489 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (84.35s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m24.347529743s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.35s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (58.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1007 13:17:06.664726 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/old-k8s-version-299687/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:17:15.446153 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/no-preload-047383/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (58.373516501s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-617489 "pgrep -a kubelet"
I1007 13:17:52.758761 1178462 config.go:182] Loaded profile config "enable-default-cni-617489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-617489 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vgg9z" [66f680fd-097f-4a0f-96cf-f53fc1307147] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vgg9z" [66f680fd-097f-4a0f-96cf-f53fc1307147] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004757892s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tkvbl" [9066ed7b-54c8-4a65-a5b4-6be1c041d6de] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004410289s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
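A quick manual equivalent of the ControllerPod wait above, assuming the flannel-617489 profile is still running (the app=flannel label and kube-flannel namespace are taken from the log):

  kubectl --context flannel-617489 -n kube-flannel get pods -l app=flannel
  kubectl --context flannel-617489 -n kube-flannel logs daemonset/kube-flannel-ds --tail=20

The second command is only a debugging aid and assumes the DaemonSet is named kube-flannel-ds, as suggested by the pod name kube-flannel-ds-tkvbl above.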

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-617489 "pgrep -a kubelet"
I1007 13:17:59.457138 1178462 config.go:182] Loaded profile config "flannel-617489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-617489 replace --force -f testdata/netcat-deployment.yaml
I1007 13:17:59.726423 1178462 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-92p8w" [2c4462d8-cb41-41dd-9415-0368a110c81a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-92p8w" [2c4462d8-cb41-41dd-9415-0368a110c81a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004541815s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-617489 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-617489 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)
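The Localhost and HairPin probes above use the same zero-I/O netcat scan from inside the netcat deployment: -z only checks that the port accepts a connection, -w 5 caps the wait at five seconds, and -i 5 spaces repeated probes. Localhost targets port 8080 on the pod itself, while HairPin targets the netcat Service name, so a pass confirms the pod can reach itself through its own Service (hairpin traffic) on this CNI. A hand-run sketch, assuming the flannel-617489 profile is still up:

  kubectl --context flannel-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"
  kubectl --context flannel-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"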

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (44.80s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1007 13:18:26.362332 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/default-k8s-diff-port-654708/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:18:28.925591 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/default-k8s-diff-port-654708/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:18:34.047144 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/default-k8s-diff-port-654708/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-617489 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (44.802755981s)
--- PASS: TestNetworkPlugins/group/bridge/Start (44.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-617489 "pgrep -a kubelet"
I1007 13:19:10.544152 1178462 config.go:182] Loaded profile config "bridge-617489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-617489 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pd8qf" [95d5c670-d314-475d-bc6c-a2ea09c0a704] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1007 13:19:12.211923 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-pd8qf" [95d5c670-d314-475d-bc6c-a2ea09c0a704] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003358549s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (25.90s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-617489 exec deployment/netcat -- nslookup kubernetes.default
E1007 13:19:22.453994 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:25.855219 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:25.861674 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:25.873029 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:25.894494 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:25.935909 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:26.017350 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:26.178832 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:26.500641 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:27.142754 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:28.424388 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:30.726645 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/functional-809471/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:30.986261 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-617489 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.175566835s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1007 13:19:35.962327 1178462 retry.go:31] will retry after 538.939747ms: exit status 1
E1007 13:19:36.108559 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Run:  kubectl --context bridge-617489 exec deployment/netcat -- nslookup kubernetes.default
E1007 13:19:42.935439 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/auto-617489/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:45.732393 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/default-k8s-diff-port-654708/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:46.350702 1178462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kindnet-617489/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context bridge-617489 exec deployment/netcat -- nslookup kubernetes.default: (10.180681708s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (25.90s)
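Note that the first nslookup above timed out (";; connection timed out; no servers could be reached") and the test only passed on the harness's retry roughly 26 seconds in. If this flakes again, the probe and the pod's resolver configuration can be repeated by hand against the bridge-617489 profile:

  kubectl --context bridge-617489 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context bridge-617489 exec deployment/netcat -- cat /etc/resolv.conf
  kubectl --context bridge-617489 -n kube-system get pods -l k8s-app=kube-dns

The last command assumes the stock CoreDNS label k8s-app=kube-dns; that label is not shown in this log and is included only as a debugging hint.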

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-617489 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (29/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.54s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-790369 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-790369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-790369
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.33s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-504513 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-536135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-536135
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-617489 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-617489

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-617489

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-617489

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-617489

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-617489

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-617489

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-617489

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-617489

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-617489

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-617489

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-617489

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-617489" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-617489" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-617489" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Oct 2024 12:52:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-068891
contexts:
- context:
    cluster: kubernetes-upgrade-068891
    user: kubernetes-upgrade-068891
  name: kubernetes-upgrade-068891
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-068891
  user:
    client-certificate: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kubernetes-upgrade-068891/client.crt
    client-key: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kubernetes-upgrade-068891/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-617489

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-617489"

                                                
                                                
----------------------- debugLogs end: kubenet-617489 [took: 3.681227318s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-617489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-617489
--- SKIP: TestNetworkPlugins/group/kubenet (3.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-617489 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-617489

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-617489

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-617489

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-617489

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-617489

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-617489

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-617489

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-617489

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-617489

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-617489

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: /etc/hosts:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: /etc/resolv.conf:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-617489

>>> host: crictl pods:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: crictl containers:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> k8s: describe netcat deployment:
error: context "cilium-617489" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-617489" does not exist

>>> k8s: netcat logs:
error: context "cilium-617489" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-617489" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-617489" does not exist

>>> k8s: coredns logs:
error: context "cilium-617489" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-617489" does not exist

>>> k8s: api server logs:
error: context "cilium-617489" does not exist

>>> host: /etc/cni:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: ip a s:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: ip r s:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: iptables-save:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: iptables table nat:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-617489

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-617489

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-617489" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-617489" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-617489

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-617489

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-617489" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-617489" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-617489" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-617489" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-617489" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: kubelet daemon config:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> k8s: kubelet logs:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19763-1173066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Oct 2024 12:52:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-068891
contexts:
- context:
    cluster: kubernetes-upgrade-068891
    user: kubernetes-upgrade-068891
  name: kubernetes-upgrade-068891
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-068891
  user:
    client-certificate: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kubernetes-upgrade-068891/client.crt
    client-key: /home/jenkins/minikube-integration/19763-1173066/.minikube/profiles/kubernetes-upgrade-068891/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-617489

>>> host: docker daemon status:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: docker daemon config:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: docker system info:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: cri-docker daemon status:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: cri-docker daemon config:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: cri-dockerd version:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: containerd daemon status:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: containerd daemon config:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: containerd config dump:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: crio daemon status:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: crio daemon config:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: /etc/crio:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

>>> host: crio config:
* Profile "cilium-617489" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-617489"

----------------------- debugLogs end: cilium-617489 [took: 4.378165589s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-617489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-617489
--- SKIP: TestNetworkPlugins/group/cilium (4.54s)