Test Report: Docker_Linux_crio_arm64 19546

9c905d7ddc6fcb24a41b70e16c9a4a5dd3740602:2024-10-04:36493
Test fail (5/323)

Order  Failed test                                    Duration (s)
32     TestAddons/serial/GCPAuth/PullSecret           480.88
35     TestAddons/parallel/Ingress                    152.39
38     TestAddons/parallel/MetricsServer              364.16
175    TestMultiControlPlane/serial/RestartCluster    126.5
277    TestPause/serial/SecondStartNoReconfiguration  32.48
TestAddons/serial/GCPAuth/PullSecret (480.88s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:615: (dbg) Run:  kubectl --context addons-561541 create -f testdata/busybox.yaml
addons_test.go:622: (dbg) Run:  kubectl --context addons-561541 create sa gcp-auth-test
addons_test.go:628: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bda0a8b9-d255-4083-9afe-f4de2a62ec0d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:628: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:628: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-561541 -n addons-561541
addons_test.go:628: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-04 02:59:27.00714033 +0000 UTC m=+699.083708560
addons_test.go:628: (dbg) Run:  kubectl --context addons-561541 describe po busybox -n default
addons_test.go:628: (dbg) kubectl --context addons-561541 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-561541/192.168.49.2
Start Time:       Fri, 04 Oct 2024 02:51:26 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.22
IPs:
IP:  10.244.0.22
Containers:
busybox:
Container ID:  
Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sleep
3600
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gpc4x (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-gpc4x:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m1s                    default-scheduler  Successfully assigned default/busybox to addons-561541
Normal   Pulling    6m32s (x4 over 8m1s)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning  Failed     6m32s (x4 over 8m)      kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
Warning  Failed     6m32s (x4 over 8m)      kubelet            Error: ErrImagePull
Warning  Failed     6m18s (x6 over 7m59s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    2m53s (x21 over 7m59s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:628: (dbg) Run:  kubectl --context addons-561541 logs busybox -n default
addons_test.go:628: (dbg) Non-zero exit: kubectl --context addons-561541 logs busybox -n default: exit status 1 (132.142308ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:628: kubectl --context addons-561541 logs busybox -n default: exit status 1
addons_test.go:630: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.88s)
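
The failure above is an image-pull authentication error: the kubelet repeatedly hit "invalid username/password: unauthorized" while pulling gcr.io/k8s-minikube/busybox:1.28.4-glibc, so the pod stayed in ImagePullBackOff until the 8m0s wait expired. A minimal manual triage sketch against the same profile (assuming addons-561541 is still running; the crictl step is a hypothetical direct pull from inside the node, bypassing any injected pull secret):

# Pod status and pull events (name, namespace and label taken from the describe output above).
kubectl --context addons-561541 get pods -n default -l integration-test=busybox
kubectl --context addons-561541 describe pod busybox -n default

# Check whether the gcp-auth addon attached an image pull secret to the default service account.
kubectl --context addons-561541 get serviceaccount default -n default -o yaml

# Hypothetical check: pull the public image directly on the node, with no pull secret involved.
out/minikube-linux-arm64 -p addons-561541 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
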

TestAddons/parallel/Ingress (152.39s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:208: (dbg) Run:  kubectl --context addons-561541 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:233: (dbg) Run:  kubectl --context addons-561541 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:246: (dbg) Run:  kubectl --context addons-561541 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:251: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4e85b3fa-bcca-4e09-848f-6fdf0fb76df3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4e85b3fa-bcca-4e09-848f-6fdf0fb76df3] Running
addons_test.go:251: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003283539s
I1004 03:01:30.339883    7560 kapi.go:150] Service nginx in namespace default found.
addons_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-561541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.551073576s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:279: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:287: (dbg) Run:  kubectl --context addons-561541 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:292: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 ip
addons_test.go:298: (dbg) Run:  nslookup hello-john.test 192.168.49.2
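
The curl probe above ran through minikube ssh and exited with status 28, curl's timeout code, after roughly 2m10s, so nothing answered on 127.0.0.1 inside the node for Host: nginx.example.com. A minimal re-check sketch, assuming the profile is still up (the controller selector is the one the test waited on at the start of this test):

# Repeat the probe with verbose output and an explicit timeout.
out/minikube-linux-arm64 -p addons-561541 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

# Confirm the ingress controller and the backing nginx pod/service/ingress exist and are ready.
kubectl --context addons-561541 get pods -n ingress-nginx -l app.kubernetes.io/component=controller
kubectl --context addons-561541 get ingress,svc,pods -n default

# Controller logs around the failed request window.
kubectl --context addons-561541 logs -n ingress-nginx -l app.kubernetes.io/component=controller --tail=100
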
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-561541
helpers_test.go:235: (dbg) docker inspect addons-561541:

-- stdout --
	[
	    {
	        "Id": "1a05bccdb598d1519bc2517f2b858714fa72d144b3c45751c0d7ba2ea4a94d63",
	        "Created": "2024-10-04T02:48:26.592380066Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8833,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-04T02:48:26.741857699Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/1a05bccdb598d1519bc2517f2b858714fa72d144b3c45751c0d7ba2ea4a94d63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1a05bccdb598d1519bc2517f2b858714fa72d144b3c45751c0d7ba2ea4a94d63/hostname",
	        "HostsPath": "/var/lib/docker/containers/1a05bccdb598d1519bc2517f2b858714fa72d144b3c45751c0d7ba2ea4a94d63/hosts",
	        "LogPath": "/var/lib/docker/containers/1a05bccdb598d1519bc2517f2b858714fa72d144b3c45751c0d7ba2ea4a94d63/1a05bccdb598d1519bc2517f2b858714fa72d144b3c45751c0d7ba2ea4a94d63-json.log",
	        "Name": "/addons-561541",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-561541:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-561541",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cdbe499e5bb99b224e5a6a7bef44d8a42b419163309df824b5b164f76a7d5ba3-init/diff:/var/lib/docker/overlay2/113409e5ac8a20e78db05ebf8d2720874d391240a7f47648e5e10a2a0c89288f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cdbe499e5bb99b224e5a6a7bef44d8a42b419163309df824b5b164f76a7d5ba3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cdbe499e5bb99b224e5a6a7bef44d8a42b419163309df824b5b164f76a7d5ba3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cdbe499e5bb99b224e5a6a7bef44d8a42b419163309df824b5b164f76a7d5ba3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-561541",
	                "Source": "/var/lib/docker/volumes/addons-561541/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-561541",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-561541",
	                "name.minikube.sigs.k8s.io": "addons-561541",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52b0e926b39a732ca349c0438ff36c69068bb9900ade82646dffd0cb2af9a447",
	            "SandboxKey": "/var/run/docker/netns/52b0e926b39a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-561541": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "9afb70ed7dfa63e157614e8e9c2bfa4a942ca170e167f5154862dbc2e3597630",
	                    "EndpointID": "1833c5b85db81f3386a924800b61d01e82129db0e3bf1c5cc01984e503d957b6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-561541",
	                        "1a05bccdb598"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-561541 -n addons-561541
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-561541 logs -n 25: (1.590422408s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-010684                                                                     | download-only-010684   | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| start   | --download-only -p                                                                          | download-docker-973464 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | download-docker-973464                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-973464                                                                   | download-docker-973464 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-541238   | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | binary-mirror-541238                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36901                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-541238                                                                     | binary-mirror-541238   | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| addons  | enable dashboard -p                                                                         | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | addons-561541                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | addons-561541                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-561541 --wait=true                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=logviewer                                                                          |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:51 UTC | 04 Oct 24 02:51 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:59 UTC | 04 Oct 24 02:59 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-561541 ip                                                                            | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:59 UTC | 04 Oct 24 02:59 UTC |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:59 UTC | 04 Oct 24 02:59 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | -p addons-561541                                                                            |                        |         |         |                     |                     |
	| addons  | addons-561541 addons                                                                        | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-561541 addons                                                                        | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-561541 ssh cat                                                                       | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | /opt/local-path-provisioner/pvc-7e10a70c-e181-4d72-a74e-5076f85972f6_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-561541 addons                                                                        | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | -p addons-561541                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | logviewer --alsologtostderr                                                                 |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-561541 addons                                                                        | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-561541 ssh curl -s                                                                   | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-561541 ip                                                                            | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:03 UTC | 04 Oct 24 03:03 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 02:48:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:48:02.163776    8328 out.go:345] Setting OutFile to fd 1 ...
	I1004 02:48:02.163979    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:02.163993    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:48:02.163999    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:02.164385    8328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 02:48:02.164859    8328 out.go:352] Setting JSON to false
	I1004 02:48:02.165601    8328 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1828,"bootTime":1728008255,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1004 02:48:02.165673    8328 start.go:139] virtualization:  
	I1004 02:48:02.168414    8328 out.go:177] * [addons-561541] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 02:48:02.169975    8328 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 02:48:02.170033    8328 notify.go:220] Checking for updates...
	I1004 02:48:02.173077    8328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:48:02.174325    8328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 02:48:02.175444    8328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	I1004 02:48:02.176798    8328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 02:48:02.177957    8328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 02:48:02.179335    8328 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 02:48:02.200601    8328 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 02:48:02.200736    8328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:48:02.261920    8328 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-04 02:48:02.252897049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:48:02.262037    8328 docker.go:318] overlay module found
	I1004 02:48:02.263554    8328 out.go:177] * Using the docker driver based on user configuration
	I1004 02:48:02.265082    8328 start.go:297] selected driver: docker
	I1004 02:48:02.265098    8328 start.go:901] validating driver "docker" against <nil>
	I1004 02:48:02.265111    8328 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 02:48:02.265778    8328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:48:02.320232    8328 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-04 02:48:02.302813386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:48:02.320442    8328 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 02:48:02.320665    8328 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:48:02.322048    8328 out.go:177] * Using Docker driver with root privileges
	I1004 02:48:02.323354    8328 cni.go:84] Creating CNI manager for ""
	I1004 02:48:02.323420    8328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 02:48:02.323435    8328 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 02:48:02.323507    8328 start.go:340] cluster config:
	{Name:addons-561541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-561541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:48:02.324871    8328 out.go:177] * Starting "addons-561541" primary control-plane node in "addons-561541" cluster
	I1004 02:48:02.326445    8328 cache.go:121] Beginning downloading kic base image for docker with crio
	I1004 02:48:02.328038    8328 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1004 02:48:02.329392    8328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:48:02.329444    8328 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1004 02:48:02.329458    8328 cache.go:56] Caching tarball of preloaded images
	I1004 02:48:02.329485    8328 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 02:48:02.329548    8328 preload.go:172] Found /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1004 02:48:02.329559    8328 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 02:48:02.329903    8328 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/config.json ...
	I1004 02:48:02.329967    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/config.json: {Name:mk5d51ff6027cfca40f377ff0780690a0b7c7e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:02.343615    8328 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1004 02:48:02.343745    8328 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1004 02:48:02.343767    8328 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1004 02:48:02.343772    8328 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1004 02:48:02.343782    8328 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1004 02:48:02.343795    8328 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1004 02:48:19.187044    8328 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1004 02:48:19.187078    8328 cache.go:194] Successfully downloaded all kic artifacts
	I1004 02:48:19.187118    8328 start.go:360] acquireMachinesLock for addons-561541: {Name:mk28445b2742a1e7724f7048fe9efccb251276cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:48:19.187243    8328 start.go:364] duration metric: took 108.101µs to acquireMachinesLock for "addons-561541"
	I1004 02:48:19.187270    8328 start.go:93] Provisioning new machine with config: &{Name:addons-561541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-561541 Namespace:default APIServerHAVIP: APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:48:19.187349    8328 start.go:125] createHost starting for "" (driver="docker")
	I1004 02:48:19.194304    8328 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1004 02:48:19.194570    8328 start.go:159] libmachine.API.Create for "addons-561541" (driver="docker")
	I1004 02:48:19.194610    8328 client.go:168] LocalClient.Create starting
	I1004 02:48:19.194732    8328 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem
	I1004 02:48:19.722917    8328 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem
	I1004 02:48:20.262959    8328 cli_runner.go:164] Run: docker network inspect addons-561541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1004 02:48:20.278663    8328 cli_runner.go:211] docker network inspect addons-561541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1004 02:48:20.278757    8328 network_create.go:284] running [docker network inspect addons-561541] to gather additional debugging logs...
	I1004 02:48:20.278778    8328 cli_runner.go:164] Run: docker network inspect addons-561541
	W1004 02:48:20.299351    8328 cli_runner.go:211] docker network inspect addons-561541 returned with exit code 1
	I1004 02:48:20.299389    8328 network_create.go:287] error running [docker network inspect addons-561541]: docker network inspect addons-561541: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-561541 not found
	I1004 02:48:20.299402    8328 network_create.go:289] output of [docker network inspect addons-561541]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-561541 not found
	
	** /stderr **
	I1004 02:48:20.299499    8328 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 02:48:20.314626    8328 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001805ee0}
	I1004 02:48:20.314670    8328 network_create.go:124] attempt to create docker network addons-561541 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1004 02:48:20.314727    8328 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-561541 addons-561541
	I1004 02:48:20.387495    8328 network_create.go:108] docker network addons-561541 192.168.49.0/24 created
	I1004 02:48:20.387530    8328 kic.go:121] calculated static IP "192.168.49.2" for the "addons-561541" container
	I1004 02:48:20.387612    8328 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1004 02:48:20.402489    8328 cli_runner.go:164] Run: docker volume create addons-561541 --label name.minikube.sigs.k8s.io=addons-561541 --label created_by.minikube.sigs.k8s.io=true
	I1004 02:48:20.419041    8328 oci.go:103] Successfully created a docker volume addons-561541
	I1004 02:48:20.419132    8328 cli_runner.go:164] Run: docker run --rm --name addons-561541-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-561541 --entrypoint /usr/bin/test -v addons-561541:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1004 02:48:22.495377    8328 cli_runner.go:217] Completed: docker run --rm --name addons-561541-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-561541 --entrypoint /usr/bin/test -v addons-561541:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (2.076203892s)
	I1004 02:48:22.495406    8328 oci.go:107] Successfully prepared a docker volume addons-561541
	I1004 02:48:22.495429    8328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:48:22.495447    8328 kic.go:194] Starting extracting preloaded images to volume ...
	I1004 02:48:22.495514    8328 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-561541:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1004 02:48:26.520187    8328 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-561541:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.024631682s)
	I1004 02:48:26.520227    8328 kic.go:203] duration metric: took 4.024777009s to extract preloaded images to volume ...
	W1004 02:48:26.520375    8328 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1004 02:48:26.520510    8328 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1004 02:48:26.578676    8328 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-561541 --name addons-561541 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-561541 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-561541 --network addons-561541 --ip 192.168.49.2 --volume addons-561541:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1004 02:48:26.906403    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Running}}
	I1004 02:48:26.925383    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:26.954872    8328 cli_runner.go:164] Run: docker exec addons-561541 stat /var/lib/dpkg/alternatives/iptables
	I1004 02:48:27.024973    8328 oci.go:144] the created container "addons-561541" has a running status.
	I1004 02:48:27.025003    8328 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa...
	I1004 02:48:27.507752    8328 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1004 02:48:27.536001    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:27.564609    8328 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1004 02:48:27.564639    8328 kic_runner.go:114] Args: [docker exec --privileged addons-561541 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1004 02:48:27.632721    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:27.652065    8328 machine.go:93] provisionDockerMachine start ...
	I1004 02:48:27.652172    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:27.677592    8328 main.go:141] libmachine: Using SSH client type: native
	I1004 02:48:27.677857    8328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1004 02:48:27.677872    8328 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 02:48:27.828909    8328 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-561541
	
	I1004 02:48:27.828935    8328 ubuntu.go:169] provisioning hostname "addons-561541"
	I1004 02:48:27.829023    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:27.852868    8328 main.go:141] libmachine: Using SSH client type: native
	I1004 02:48:27.853109    8328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1004 02:48:27.853126    8328 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-561541 && echo "addons-561541" | sudo tee /etc/hostname
	I1004 02:48:28.009681    8328 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-561541
	
	I1004 02:48:28.009815    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:28.029190    8328 main.go:141] libmachine: Using SSH client type: native
	I1004 02:48:28.029544    8328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1004 02:48:28.029569    8328 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-561541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-561541/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-561541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 02:48:28.165042    8328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
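
	The provisioning commands above (reading and setting the hostname, patching /etc/hosts) are executed over SSH to the container's forwarded port 127.0.0.1:32768 using the generated id_rsa key - that is what the "Using SSH client type: native" lines correspond to. The following is a minimal Go sketch of opening such a session with golang.org/x/crypto/ssh and running one command; the key path, user and port are taken from the log, everything else is illustrative and not minikube's implementation.

	// ssh_probe.go - illustrative sketch of running a command over the kic container's SSH port.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("hostname")
		fmt.Printf("%s (err=%v)\n", out, err)
	}
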
	I1004 02:48:28.165069    8328 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19546-2238/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-2238/.minikube}
	I1004 02:48:28.165102    8328 ubuntu.go:177] setting up certificates
	I1004 02:48:28.165112    8328 provision.go:84] configureAuth start
	I1004 02:48:28.165178    8328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-561541
	I1004 02:48:28.182797    8328 provision.go:143] copyHostCerts
	I1004 02:48:28.182883    8328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem (1679 bytes)
	I1004 02:48:28.183003    8328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem (1082 bytes)
	I1004 02:48:28.183083    8328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem (1123 bytes)
	I1004 02:48:28.183138    8328 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem org=jenkins.addons-561541 san=[127.0.0.1 192.168.49.2 addons-561541 localhost minikube]
	I1004 02:48:28.508841    8328 provision.go:177] copyRemoteCerts
	I1004 02:48:28.508932    8328 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 02:48:28.508983    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:28.525627    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:28.625563    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 02:48:28.648696    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 02:48:28.671918    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 02:48:28.694810    8328 provision.go:87] duration metric: took 529.671211ms to configureAuth
	I1004 02:48:28.694837    8328 ubuntu.go:193] setting minikube options for container-runtime
	I1004 02:48:28.695050    8328 config.go:182] Loaded profile config "addons-561541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 02:48:28.695157    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:28.711922    8328 main.go:141] libmachine: Using SSH client type: native
	I1004 02:48:28.712204    8328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1004 02:48:28.712226    8328 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 02:48:28.940754    8328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 02:48:28.940822    8328 machine.go:96] duration metric: took 1.288733565s to provisionDockerMachine
	I1004 02:48:28.940846    8328 client.go:171] duration metric: took 9.746224774s to LocalClient.Create
	I1004 02:48:28.940899    8328 start.go:167] duration metric: took 9.746311502s to libmachine.API.Create "addons-561541"
	I1004 02:48:28.940924    8328 start.go:293] postStartSetup for "addons-561541" (driver="docker")
	I1004 02:48:28.940950    8328 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 02:48:28.941086    8328 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 02:48:28.941163    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:28.958400    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:29.054204    8328 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 02:48:29.057308    8328 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1004 02:48:29.057353    8328 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1004 02:48:29.057365    8328 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1004 02:48:29.057378    8328 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1004 02:48:29.057397    8328 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/addons for local assets ...
	I1004 02:48:29.057476    8328 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/files for local assets ...
	I1004 02:48:29.057506    8328 start.go:296] duration metric: took 116.563771ms for postStartSetup
	I1004 02:48:29.057879    8328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-561541
	I1004 02:48:29.073843    8328 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/config.json ...
	I1004 02:48:29.074139    8328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 02:48:29.074189    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:29.090614    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:29.181670    8328 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1004 02:48:29.185928    8328 start.go:128] duration metric: took 9.998563394s to createHost
	I1004 02:48:29.185997    8328 start.go:83] releasing machines lock for "addons-561541", held for 9.99874347s
	I1004 02:48:29.186080    8328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-561541
	I1004 02:48:29.202215    8328 ssh_runner.go:195] Run: cat /version.json
	I1004 02:48:29.202265    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:29.202273    8328 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 02:48:29.202343    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:29.223326    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:29.223493    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:29.454866    8328 ssh_runner.go:195] Run: systemctl --version
	I1004 02:48:29.459013    8328 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 02:48:29.598778    8328 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 02:48:29.602816    8328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 02:48:29.621156    8328 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1004 02:48:29.621279    8328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 02:48:29.649362    8328 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1004 02:48:29.649384    8328 start.go:495] detecting cgroup driver to use...
	I1004 02:48:29.649428    8328 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1004 02:48:29.649496    8328 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 02:48:29.664664    8328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 02:48:29.675126    8328 docker.go:217] disabling cri-docker service (if available) ...
	I1004 02:48:29.675213    8328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 02:48:29.688214    8328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 02:48:29.702371    8328 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 02:48:29.782447    8328 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 02:48:29.884035    8328 docker.go:233] disabling docker service ...
	I1004 02:48:29.884133    8328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 02:48:29.903634    8328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 02:48:29.915810    8328 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 02:48:30.005243    8328 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 02:48:30.108036    8328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 02:48:30.120992    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 02:48:30.140144    8328 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 02:48:30.140251    8328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.151213    8328 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 02:48:30.151334    8328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.162049    8328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.172548    8328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.182879    8328 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 02:48:30.192518    8328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.202556    8328 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.218816    8328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.228798    8328 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 02:48:30.237642    8328 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 02:48:30.237708    8328 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 02:48:30.251373    8328 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 02:48:30.260317    8328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:48:30.338222    8328 ssh_runner.go:195] Run: sudo systemctl restart crio
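
	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (reconstructed from the commands shown, not a verbatim capture of the file; surrounding TOML sections are omitted), which the `sudo systemctl restart crio` line above then applies:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
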
	I1004 02:48:30.444495    8328 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 02:48:30.444598    8328 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 02:48:30.448249    8328 start.go:563] Will wait 60s for crictl version
	I1004 02:48:30.448367    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:48:30.451773    8328 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 02:48:30.496757    8328 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1004 02:48:30.496962    8328 ssh_runner.go:195] Run: crio --version
	I1004 02:48:30.535478    8328 ssh_runner.go:195] Run: crio --version
	I1004 02:48:30.575596    8328 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1004 02:48:30.577393    8328 cli_runner.go:164] Run: docker network inspect addons-561541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 02:48:30.592973    8328 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1004 02:48:30.596544    8328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:48:30.607213    8328 kubeadm.go:883] updating cluster {Name:addons-561541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-561541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1004 02:48:30.607336    8328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:48:30.607389    8328 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:48:30.680284    8328 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 02:48:30.680308    8328 crio.go:433] Images already preloaded, skipping extraction
	I1004 02:48:30.680369    8328 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:48:30.716735    8328 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 02:48:30.716761    8328 cache_images.go:84] Images are preloaded, skipping loading
	I1004 02:48:30.716770    8328 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1004 02:48:30.716857    8328 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-561541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-561541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 02:48:30.716937    8328 ssh_runner.go:195] Run: crio config
	I1004 02:48:30.770881    8328 cni.go:84] Creating CNI manager for ""
	I1004 02:48:30.770903    8328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 02:48:30.770913    8328 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 02:48:30.770941    8328 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-561541 NodeName:addons-561541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 02:48:30.771100    8328 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-561541"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 02:48:30.771180    8328 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 02:48:30.779940    8328 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 02:48:30.780022    8328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 02:48:30.788511    8328 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1004 02:48:30.805874    8328 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 02:48:30.823701    8328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1004 02:48:30.841418    8328 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1004 02:48:30.844797    8328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:48:30.856005    8328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:48:30.934755    8328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 02:48:30.949008    8328 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541 for IP: 192.168.49.2
	I1004 02:48:30.949033    8328 certs.go:194] generating shared ca certs ...
	I1004 02:48:30.949050    8328 certs.go:226] acquiring lock for ca certs: {Name:mk468b07ab6620fd74cefc3667e1a8643008ce5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:30.949173    8328 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key
	I1004 02:48:31.188600    8328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt ...
	I1004 02:48:31.188632    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt: {Name:mk85bb8ad320af02292bb5af5763b5687fc2c71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.188832    8328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key ...
	I1004 02:48:31.188845    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key: {Name:mkf88c660188079b3d7cb04d43c22f4d16f00ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.188940    8328 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key
	I1004 02:48:31.432161    8328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt ...
	I1004 02:48:31.432192    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt: {Name:mkcbaf945ec67de02c8c92440fa4864dff75ef93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.432414    8328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key ...
	I1004 02:48:31.432429    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key: {Name:mk64a6b53b8d8913780411f2edaf4bbe5b2e2be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.432520    8328 certs.go:256] generating profile certs ...
	I1004 02:48:31.432579    8328 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.key
	I1004 02:48:31.432596    8328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt with IP's: []
	I1004 02:48:32.274868    8328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt ...
	I1004 02:48:32.274905    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: {Name:mk77a3876305a8cb8211156243bd37074c11c7eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.275109    8328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.key ...
	I1004 02:48:32.275121    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.key: {Name:mkad3a95dd66c000812d4bac0a0c5f17f6bccd6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.275208    8328 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.key.32857c0e
	I1004 02:48:32.275228    8328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.crt.32857c0e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1004 02:48:32.826126    8328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.crt.32857c0e ...
	I1004 02:48:32.826157    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.crt.32857c0e: {Name:mk09838633f5dc8e87cc56a6ac4328b525754f19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.826379    8328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.key.32857c0e ...
	I1004 02:48:32.826393    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.key.32857c0e: {Name:mk7085a89dcc6b89aa5d083374666fc9f9a6ebfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.826495    8328 certs.go:381] copying /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.crt.32857c0e -> /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.crt
	I1004 02:48:32.826583    8328 certs.go:385] copying /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.key.32857c0e -> /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.key
	I1004 02:48:32.826650    8328 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.key
	I1004 02:48:32.826669    8328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.crt with IP's: []
	I1004 02:48:33.004281    8328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.crt ...
	I1004 02:48:33.004311    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.crt: {Name:mk0273fcd7c5e6aa98ec0921888aa45a3c335bcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:33.004504    8328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.key ...
	I1004 02:48:33.004517    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.key: {Name:mkbb6adcbfb968918aa7d55d4a3c911d213bc33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:33.004709    8328 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 02:48:33.004748    8328 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem (1082 bytes)
	I1004 02:48:33.004791    8328 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem (1123 bytes)
	I1004 02:48:33.004821    8328 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem (1679 bytes)
	I1004 02:48:33.005446    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 02:48:33.031137    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 02:48:33.056300    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 02:48:33.080349    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 02:48:33.104544    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1004 02:48:33.129757    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 02:48:33.153524    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 02:48:33.177465    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 02:48:33.201240    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 02:48:33.233724    8328 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 02:48:33.250882    8328 ssh_runner.go:195] Run: openssl version
	I1004 02:48:33.256372    8328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 02:48:33.267260    8328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:48:33.270699    8328 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:48:33.270779    8328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:48:33.277391    8328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
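
	The "generating ... ca cert" steps above create self-signed CAs on the host and copy them onto the node; the hash-named symlink created here (b5213941.0) is the standard OpenSSL subject-hash lookup entry for /etc/ssl/certs, derived from the `openssl x509 -hash` run just above. As a rough illustration only (not minikube's implementation; key size, validity and subject below are assumptions), a self-signed CA such as "minikubeCA" can be produced with Go's crypto/x509:

	// make_ca.go - illustrative sketch of creating a self-signed CA cert/key pair in PEM form.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Self-signed: the template is also its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		crt, _ := os.Create("ca.crt")
		pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		crt.Close()
		k, _ := os.Create("ca.key")
		pem.Encode(k, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		k.Close()
	}
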
	I1004 02:48:33.286832    8328 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 02:48:33.290066    8328 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 02:48:33.290112    8328 kubeadm.go:392] StartCluster: {Name:addons-561541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-561541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:48:33.290190    8328 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 02:48:33.290248    8328 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 02:48:33.325537    8328 cri.go:89] found id: ""
	I1004 02:48:33.325654    8328 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 02:48:33.334174    8328 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:48:33.342726    8328 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1004 02:48:33.342828    8328 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:48:33.351242    8328 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:48:33.351261    8328 kubeadm.go:157] found existing configuration files:
	
	I1004 02:48:33.351308    8328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 02:48:33.359715    8328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 02:48:33.359793    8328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 02:48:33.369159    8328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 02:48:33.378101    8328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 02:48:33.378190    8328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 02:48:33.386679    8328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 02:48:33.395688    8328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 02:48:33.395754    8328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 02:48:33.404303    8328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 02:48:33.412666    8328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 02:48:33.412755    8328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 02:48:33.420849    8328 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1004 02:48:33.459536    8328 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 02:48:33.459655    8328 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 02:48:33.479195    8328 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1004 02:48:33.479287    8328 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1004 02:48:33.479341    8328 kubeadm.go:310] OS: Linux
	I1004 02:48:33.479403    8328 kubeadm.go:310] CGROUPS_CPU: enabled
	I1004 02:48:33.479471    8328 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1004 02:48:33.479535    8328 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1004 02:48:33.479601    8328 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1004 02:48:33.479668    8328 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1004 02:48:33.479765    8328 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1004 02:48:33.479845    8328 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1004 02:48:33.479913    8328 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1004 02:48:33.479978    8328 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1004 02:48:33.538372    8328 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:48:33.538512    8328 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:48:33.538605    8328 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 02:48:33.547733    8328 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:48:33.550706    8328 out.go:235]   - Generating certificates and keys ...
	I1004 02:48:33.550813    8328 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 02:48:33.550894    8328 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 02:48:34.116405    8328 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 02:48:34.992261    8328 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 02:48:35.858654    8328 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 02:48:36.525746    8328 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 02:48:36.873613    8328 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 02:48:36.873759    8328 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-561541 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1004 02:48:38.109795    8328 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 02:48:38.110112    8328 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-561541 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1004 02:48:38.275211    8328 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 02:48:39.064025    8328 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 02:48:39.444378    8328 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 02:48:39.444574    8328 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:48:39.710488    8328 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:48:40.070453    8328 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 02:48:40.302074    8328 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:48:41.044572    8328 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:48:41.573391    8328 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:48:41.574157    8328 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:48:41.577402    8328 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:48:41.578974    8328 out.go:235]   - Booting up control plane ...
	I1004 02:48:41.579076    8328 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:48:41.579160    8328 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:48:41.580167    8328 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:48:41.590213    8328 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:48:41.596069    8328 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:48:41.596129    8328 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 02:48:41.687080    8328 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 02:48:41.687202    8328 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 02:48:42.703341    8328 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.016535631s
	I1004 02:48:42.703488    8328 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 02:48:48.205075    8328 kubeadm.go:310] [api-check] The API server is healthy after 5.501721486s
	I1004 02:48:48.224911    8328 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:48:48.238228    8328 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:48:48.263116    8328 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:48:48.263312    8328 kubeadm.go:310] [mark-control-plane] Marking the node addons-561541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:48:48.274266    8328 kubeadm.go:310] [bootstrap-token] Using token: 2237cm.h1kig5501t3tmep9
	I1004 02:48:48.277001    8328 out.go:235]   - Configuring RBAC rules ...
	I1004 02:48:48.277134    8328 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:48:48.281449    8328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:48:48.289446    8328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:48:48.293667    8328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:48:48.297547    8328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:48:48.302512    8328 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:48:48.612029    8328 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:48:49.070634    8328 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 02:48:49.611902    8328 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 02:48:49.613766    8328 kubeadm.go:310] 
	I1004 02:48:49.613847    8328 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 02:48:49.613860    8328 kubeadm.go:310] 
	I1004 02:48:49.613938    8328 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 02:48:49.613947    8328 kubeadm.go:310] 
	I1004 02:48:49.613973    8328 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 02:48:49.614035    8328 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:48:49.614088    8328 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:48:49.614097    8328 kubeadm.go:310] 
	I1004 02:48:49.614150    8328 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 02:48:49.614157    8328 kubeadm.go:310] 
	I1004 02:48:49.614205    8328 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:48:49.614212    8328 kubeadm.go:310] 
	I1004 02:48:49.614264    8328 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 02:48:49.614357    8328 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:48:49.614440    8328 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:48:49.614451    8328 kubeadm.go:310] 
	I1004 02:48:49.614534    8328 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:48:49.614613    8328 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 02:48:49.614623    8328 kubeadm.go:310] 
	I1004 02:48:49.614707    8328 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2237cm.h1kig5501t3tmep9 \
	I1004 02:48:49.614812    8328 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aca64f2211befde5878f407d8185a64dfef5cf14c4e1f31b88bf71c58b586df2 \
	I1004 02:48:49.614835    8328 kubeadm.go:310] 	--control-plane 
	I1004 02:48:49.614842    8328 kubeadm.go:310] 
	I1004 02:48:49.614927    8328 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:48:49.614934    8328 kubeadm.go:310] 
	I1004 02:48:49.615015    8328 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2237cm.h1kig5501t3tmep9 \
	I1004 02:48:49.615119    8328 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aca64f2211befde5878f407d8185a64dfef5cf14c4e1f31b88bf71c58b586df2 
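The two join commands above reuse the same bootstrap token and CA-cert hash. If that hash ever needs to be re-derived on the control plane, the standard kubeadm recipe is the sketch below (assuming the default CA path /etc/kubernetes/pki/ca.crt; this command is not part of this log):

    # prints the value passed to --discovery-token-ca-cert-hash (prefix it with "sha256:")
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'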
	I1004 02:48:49.618636    8328 kubeadm.go:310] W1004 02:48:33.456263    1180 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 02:48:49.618934    8328 kubeadm.go:310] W1004 02:48:33.457096    1180 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 02:48:49.619148    8328 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1004 02:48:49.619255    8328 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
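Both deprecation warnings above point at the same remedy; a sketch of the migration they name (old.yaml/new.yaml are the placeholder file names from the warning text, not files present in this run):

    # rewrites a kubeadm.k8s.io/v1beta3 ClusterConfiguration/InitConfiguration
    # into the newer API version the installed kubeadm expects
    kubeadm config migrate --old-config old.yaml --new-config new.yaml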
	I1004 02:48:49.619273    8328 cni.go:84] Creating CNI manager for ""
	I1004 02:48:49.619282    8328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 02:48:49.622033    8328 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1004 02:48:49.624607    8328 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 02:48:49.628276    8328 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1004 02:48:49.628296    8328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1004 02:48:49.646956    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 02:48:49.922758    8328 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:48:49.922829    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:49.922894    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-561541 minikube.k8s.io/updated_at=2024_10_04T02_48_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=addons-561541 minikube.k8s.io/primary=true
	I1004 02:48:50.103387    8328 ops.go:34] apiserver oom_adj: -16
	I1004 02:48:50.103524    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:50.603716    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:51.104550    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:51.604455    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:52.104368    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:52.604327    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:53.103912    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:53.603568    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:53.689859    8328 kubeadm.go:1113] duration metric: took 3.767088472s to wait for elevateKubeSystemPrivileges
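The repeated `kubectl get sa default` runs above are a readiness poll: the elevateKubeSystemPrivileges step keeps retrying until the default ServiceAccount exists. A hand-rolled equivalent of that wait (illustrative only, not minikube's code; the interval is arbitrary):

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the ServiceAccount shows up
    done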
	I1004 02:48:53.689886    8328 kubeadm.go:394] duration metric: took 20.399777885s to StartCluster
	I1004 02:48:53.689902    8328 settings.go:142] acquiring lock: {Name:mk9c80036423f55b2143f3dcbc4f16f5b78f75ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:53.690020    8328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 02:48:53.690421    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/kubeconfig: {Name:mkd1a87175976669e9a14c51acaef20b883a2130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:53.690615    8328 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:48:53.690745    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:48:53.690973    8328 config.go:182] Loaded profile config "addons-561541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 02:48:53.691007    8328 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:true metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1004 02:48:53.691089    8328 addons.go:69] Setting yakd=true in profile "addons-561541"
	I1004 02:48:53.691106    8328 addons.go:234] Setting addon yakd=true in "addons-561541"
	I1004 02:48:53.691129    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.691613    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.691995    8328 addons.go:69] Setting metrics-server=true in profile "addons-561541"
	I1004 02:48:53.692017    8328 addons.go:234] Setting addon metrics-server=true in "addons-561541"
	I1004 02:48:53.692039    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.692444    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.695667    8328 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-561541"
	I1004 02:48:53.695739    8328 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-561541"
	I1004 02:48:53.695879    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.696066    8328 addons.go:69] Setting registry=true in profile "addons-561541"
	I1004 02:48:53.696145    8328 addons.go:234] Setting addon registry=true in "addons-561541"
	I1004 02:48:53.696175    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.696617    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.696737    8328 addons.go:69] Setting cloud-spanner=true in profile "addons-561541"
	I1004 02:48:53.696753    8328 addons.go:234] Setting addon cloud-spanner=true in "addons-561541"
	I1004 02:48:53.696772    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.697179    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.699681    8328 addons.go:69] Setting storage-provisioner=true in profile "addons-561541"
	I1004 02:48:53.699753    8328 addons.go:234] Setting addon storage-provisioner=true in "addons-561541"
	I1004 02:48:53.699815    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.700544    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.701680    8328 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-561541"
	I1004 02:48:53.701738    8328 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-561541"
	I1004 02:48:53.701771    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.702264    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.702874    8328 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-561541"
	I1004 02:48:53.702893    8328 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-561541"
	I1004 02:48:53.703174    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.707557    8328 addons.go:69] Setting volcano=true in profile "addons-561541"
	I1004 02:48:53.707594    8328 addons.go:234] Setting addon volcano=true in "addons-561541"
	I1004 02:48:53.707629    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.708122    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.711944    8328 addons.go:69] Setting default-storageclass=true in profile "addons-561541"
	I1004 02:48:53.711982    8328 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-561541"
	I1004 02:48:53.712358    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.725311    8328 addons.go:69] Setting volumesnapshots=true in profile "addons-561541"
	I1004 02:48:53.725348    8328 addons.go:234] Setting addon volumesnapshots=true in "addons-561541"
	I1004 02:48:53.725387    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.725908    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.726053    8328 addons.go:69] Setting gcp-auth=true in profile "addons-561541"
	I1004 02:48:53.726076    8328 mustload.go:65] Loading cluster: addons-561541
	I1004 02:48:53.726232    8328 config.go:182] Loaded profile config "addons-561541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 02:48:53.726448    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.745304    8328 out.go:177] * Verifying Kubernetes components...
	I1004 02:48:53.745403    8328 addons.go:69] Setting ingress=true in profile "addons-561541"
	I1004 02:48:53.745421    8328 addons.go:234] Setting addon ingress=true in "addons-561541"
	I1004 02:48:53.745462    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.748470    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.750417    8328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:48:53.751247    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.758840    8328 addons.go:69] Setting ingress-dns=true in profile "addons-561541"
	I1004 02:48:53.758890    8328 addons.go:234] Setting addon ingress-dns=true in "addons-561541"
	I1004 02:48:53.758933    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.759545    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.782570    8328 addons.go:69] Setting inspektor-gadget=true in profile "addons-561541"
	I1004 02:48:53.782607    8328 addons.go:234] Setting addon inspektor-gadget=true in "addons-561541"
	I1004 02:48:53.782643    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.783133    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.816110    8328 addons.go:69] Setting logviewer=true in profile "addons-561541"
	I1004 02:48:53.816142    8328 addons.go:234] Setting addon logviewer=true in "addons-561541"
	I1004 02:48:53.816180    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.816656    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.862158    8328 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1004 02:48:53.865253    8328 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1004 02:48:53.868009    8328 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1004 02:48:53.877088    8328 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:48:53.877496    8328 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1004 02:48:53.877519    8328 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1004 02:48:53.877592    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:53.890315    8328 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 02:48:53.890336    8328 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 02:48:53.890398    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:53.916305    8328 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-561541"
	I1004 02:48:53.922199    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.922802    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.965662    8328 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:48:53.965689    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:48:53.965762    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:53.986061    8328 out.go:177]   - Using image docker.io/registry:2.8.3
	I1004 02:48:53.997322    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1004 02:48:54.000968    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1004 02:48:54.001123    8328 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1004 02:48:54.001137    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1004 02:48:54.001229    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.005586    8328 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1004 02:48:54.016828    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1004 02:48:54.017331    8328 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1004 02:48:54.017347    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1004 02:48:54.017406    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.018766    8328 addons.go:234] Setting addon default-storageclass=true in "addons-561541"
	I1004 02:48:54.018802    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:54.019333    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:54.026926    8328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1004 02:48:54.026946    8328 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1004 02:48:54.027008    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.029303    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:54.041373    8328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:48:54.042160    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1004 02:48:54.042224    8328 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	W1004 02:48:54.042456    8328 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1004 02:48:54.057777    8328 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1004 02:48:54.062510    8328 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1004 02:48:54.062570    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1004 02:48:54.062657    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.062817    8328 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1004 02:48:54.067457    8328 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1004 02:48:54.067521    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1004 02:48:54.067617    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.087710    8328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:48:54.088133    8328 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1004 02:48:54.088164    8328 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1004 02:48:54.088243    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.092003    8328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1004 02:48:54.094319    8328 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1004 02:48:54.094377    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1004 02:48:54.094465    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.115672    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1004 02:48:54.118054    8328 out.go:177]   - Using image docker.io/ivans3/minikube-log-viewer:v1
	I1004 02:48:54.120870    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1004 02:48:54.121147    8328 addons.go:431] installing /etc/kubernetes/addons/logviewer-dp-and-svc.yaml
	I1004 02:48:54.121194    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/logviewer-dp-and-svc.yaml (2016 bytes)
	I1004 02:48:54.121306    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.126276    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1004 02:48:54.128839    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1004 02:48:54.137969    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1004 02:48:54.147371    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1004 02:48:54.147392    8328 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1004 02:48:54.147457    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.159174    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
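Each "new ssh client" line here is built from the docker-inspected host port and the per-machine key seen in the cli_runner calls above; the equivalent manual connection would be roughly (illustrative):

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa \
        -p 32768 docker@127.0.0.1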
	I1004 02:48:54.163315    8328 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1004 02:48:54.163806    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.164278    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.168581    8328 out.go:177]   - Using image docker.io/busybox:stable
	I1004 02:48:54.171155    8328 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1004 02:48:54.171177    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1004 02:48:54.171241    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.181758    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.193801    8328 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 02:48:54.193821    8328 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 02:48:54.193881    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.205175    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.213139    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.221785    8328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 02:48:54.221965    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 02:48:54.261607    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.266588    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.277445    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.287482    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.293708    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	W1004 02:48:54.297941    8328 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1004 02:48:54.297972    8328 retry.go:31] will retry after 169.493237ms: ssh: handshake failed: EOF
	I1004 02:48:54.298352    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	W1004 02:48:54.301417    8328 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1004 02:48:54.301439    8328 retry.go:31] will retry after 206.930752ms: ssh: handshake failed: EOF
	I1004 02:48:54.303548    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.312106    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.352364    8328 node_ready.go:35] waiting up to 6m0s for node "addons-561541" to be "Ready" ...
	I1004 02:48:54.583149    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 02:48:54.610971    8328 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 02:48:54.611042    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1004 02:48:54.620831    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1004 02:48:54.620909    8328 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1004 02:48:54.635348    8328 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1004 02:48:54.635427    8328 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1004 02:48:54.646078    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:48:54.663927    8328 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1004 02:48:54.663998    8328 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1004 02:48:54.664675    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1004 02:48:54.691626    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1004 02:48:54.701113    8328 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1004 02:48:54.701183    8328 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1004 02:48:54.715590    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1004 02:48:54.726337    8328 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1004 02:48:54.726411    8328 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1004 02:48:54.764341    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1004 02:48:54.764415    8328 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1004 02:48:54.813183    8328 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1004 02:48:54.813452    8328 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1004 02:48:54.825259    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1004 02:48:54.829539    8328 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 02:48:54.829608    8328 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 02:48:54.831870    8328 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1004 02:48:54.831931    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1004 02:48:54.858527    8328 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1004 02:48:54.858602    8328 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1004 02:48:54.917674    8328 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1004 02:48:54.917746    8328 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1004 02:48:54.995360    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1004 02:48:54.995435    8328 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1004 02:48:54.999766    8328 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1004 02:48:54.999838    8328 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1004 02:48:55.011551    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1004 02:48:55.018540    8328 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:48:55.018616    8328 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 02:48:55.040682    8328 addons.go:431] installing /etc/kubernetes/addons/logviewer-rbac.yaml
	I1004 02:48:55.040757    8328 ssh_runner.go:362] scp logviewer/logviewer-rbac.yaml --> /etc/kubernetes/addons/logviewer-rbac.yaml (1064 bytes)
	I1004 02:48:55.048253    8328 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1004 02:48:55.048331    8328 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1004 02:48:55.088569    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1004 02:48:55.124060    8328 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1004 02:48:55.124140    8328 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1004 02:48:55.163467    8328 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1004 02:48:55.163536    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1004 02:48:55.171385    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1004 02:48:55.171460    8328 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1004 02:48:55.197440    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:48:55.215875    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1004 02:48:55.215951    8328 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1004 02:48:55.223700    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/logviewer-dp-and-svc.yaml -f /etc/kubernetes/addons/logviewer-rbac.yaml
	I1004 02:48:55.290202    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1004 02:48:55.290290    8328 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1004 02:48:55.291108    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1004 02:48:55.294341    8328 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1004 02:48:55.294393    8328 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1004 02:48:55.341494    8328 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:48:55.341565    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1004 02:48:55.447078    8328 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1004 02:48:55.447155    8328 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1004 02:48:55.477667    8328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1004 02:48:55.477737    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1004 02:48:55.481333    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:48:55.581178    8328 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1004 02:48:55.581282    8328 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1004 02:48:55.587724    8328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1004 02:48:55.587796    8328 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1004 02:48:55.667514    8328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1004 02:48:55.667584    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1004 02:48:55.675740    8328 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1004 02:48:55.675813    8328 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1004 02:48:55.730121    8328 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1004 02:48:55.730180    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1004 02:48:55.743901    8328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1004 02:48:55.743971    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1004 02:48:55.781229    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1004 02:48:55.790360    8328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1004 02:48:55.790434    8328 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1004 02:48:55.873535    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1004 02:48:56.530616    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:48:57.112288    8328 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.890298073s)
	I1004 02:48:57.112371    8328 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
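The long sed pipeline that just completed is what produced that host record. Reconstructed from its expressions, the net change to the CoreDNS Corefile is the block below (shown here only for readability), plus a `log` directive inserted before the existing `errors` line:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }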
	I1004 02:48:57.750373    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.167118728s)
	I1004 02:48:57.903742    8328 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-561541" context rescaled to 1 replicas
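The rescale reported above can be reproduced by hand; an illustrative equivalent:

    kubectl --context addons-561541 -n kube-system scale deployment coredns --replicas=1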
	I1004 02:48:58.567137    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:48:59.233865    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.58770567s)
	I1004 02:49:00.512635    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.847890993s)
	I1004 02:49:00.512669    8328 addons.go:475] Verifying addon ingress=true in "addons-561541"
	I1004 02:49:00.512847    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.82115089s)
	I1004 02:49:00.512978    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.797329059s)
	I1004 02:49:00.513060    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.687741073s)
	I1004 02:49:00.513110    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.501488417s)
	I1004 02:49:00.513622    8328 addons.go:475] Verifying addon registry=true in "addons-561541"
	I1004 02:49:00.513154    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.424516771s)
	I1004 02:49:00.513241    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.315707328s)
	I1004 02:49:00.514727    8328 addons.go:475] Verifying addon metrics-server=true in "addons-561541"
	I1004 02:49:00.513271    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/logviewer-dp-and-svc.yaml -f /etc/kubernetes/addons/logviewer-rbac.yaml: (5.289513066s)
	I1004 02:49:00.513308    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.222150075s)
	I1004 02:49:00.513378    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.031967949s)
	W1004 02:49:00.515492    8328 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1004 02:49:00.515517    8328 retry.go:31] will retry after 133.493191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
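The failure above is the usual CRD-ordering problem: the VolumeSnapshotClass object is applied in the same pass that creates its CRD, so the new API is not yet discoverable ("ensure CRDs are installed first"). minikube simply retries, and the `kubectl apply --force` re-run further down succeeds; a manual workaround would apply and wait for the CRD first, roughly (illustrative, not minikube's code):

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml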
	I1004 02:49:00.513436    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.732145135s)
	I1004 02:49:00.516162    8328 out.go:177] * Verifying ingress addon...
	I1004 02:49:00.518137    8328 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-561541 service yakd-dashboard -n yakd-dashboard
	
	I1004 02:49:00.518141    8328 out.go:177] * Verifying registry addon...
	I1004 02:49:00.521530    8328 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1004 02:49:00.521542    8328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1004 02:49:00.547787    8328 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1004 02:49:00.547820    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:00.549021    8328 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1004 02:49:00.549044    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
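The kapi "waiting for pod" lines above and below are label-selector polls against the respective namespaces; roughly the same check expressed with kubectl (illustrative; the timeout is an assumption, not taken from the log):

    kubectl -n ingress-nginx wait --for=condition=Ready pod \
      -l app.kubernetes.io/name=ingress-nginx --timeout=6m0s
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=registry --timeout=6m0s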
	I1004 02:49:00.649189    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:49:00.879709    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:00.999883    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.126254053s)
	I1004 02:49:00.999924    8328 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-561541"
	I1004 02:49:01.003117    8328 out.go:177] * Verifying csi-hostpath-driver addon...
	I1004 02:49:01.006679    8328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1004 02:49:01.079290    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:01.080066    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:01.081699    8328 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1004 02:49:01.081723    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:01.529227    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:01.529694    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:01.530504    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:01.983264    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.333994147s)
	I1004 02:49:02.012062    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:02.034699    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:02.036200    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:02.517843    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:02.527982    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:02.529546    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:03.010440    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:03.025761    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:03.027528    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:03.356694    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:03.511966    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:03.613197    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:03.613724    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:04.011402    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:04.029091    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:04.030733    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:04.511737    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:04.515000    8328 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1004 02:49:04.515100    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:49:04.536194    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:49:04.622437    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:04.622767    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:04.652340    8328 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1004 02:49:04.669570    8328 addons.go:234] Setting addon gcp-auth=true in "addons-561541"
	I1004 02:49:04.669618    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:49:04.670076    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:49:04.685936    8328 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1004 02:49:04.685991    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:49:04.720743    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:49:04.847183    8328 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1004 02:49:04.850577    8328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:49:04.853084    8328 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1004 02:49:04.853106    8328 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1004 02:49:04.887726    8328 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1004 02:49:04.887753    8328 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1004 02:49:04.910893    8328 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1004 02:49:04.910963    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1004 02:49:04.931574    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1004 02:49:05.014643    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:05.026749    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:05.027820    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:05.372402    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:05.531671    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:05.533353    8328 addons.go:475] Verifying addon gcp-auth=true in "addons-561541"
	I1004 02:49:05.538199    8328 out.go:177] * Verifying gcp-auth addon...
	I1004 02:49:05.541807    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:05.541928    8328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1004 02:49:05.626783    8328 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1004 02:49:05.626808    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:05.627058    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:06.010037    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:06.026808    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:06.027619    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:06.046300    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:06.510804    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:06.525508    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:06.526349    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:06.545412    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:07.009954    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:07.025167    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:07.026419    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:07.046477    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:07.510734    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:07.525475    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:07.526507    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:07.545626    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:07.855323    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:08.010990    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:08.025928    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:08.026663    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:08.045565    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:08.511112    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:08.525393    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:08.526257    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:08.545186    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:09.010156    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:09.025819    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:09.026678    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:09.046423    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:09.510254    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:09.525715    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:09.526715    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:09.544852    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:09.855665    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:10.015970    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:10.027013    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:10.028056    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:10.045795    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:10.510522    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:10.524848    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:10.525964    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:10.545075    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:11.010739    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:11.025236    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:11.026514    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:11.046557    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:11.510263    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:11.525190    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:11.526203    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:11.545569    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:11.856129    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:12.009920    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:12.025713    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:12.026717    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:12.045009    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:12.510614    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:12.526163    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:12.526827    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:12.545130    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:13.010380    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:13.025839    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:13.026568    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:13.046332    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:13.510801    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:13.525595    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:13.526311    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:13.545237    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:14.010196    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:14.025766    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:14.026781    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:14.044980    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:14.356359    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:14.510075    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:14.526191    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:14.526191    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:14.544867    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:15.009998    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:15.034208    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:15.035493    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:15.047743    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:15.510199    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:15.525979    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:15.526742    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:15.545731    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:16.010598    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:16.025273    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:16.026280    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:16.045508    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:16.357045    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:16.509935    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:16.525435    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:16.526168    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:16.545285    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:17.010965    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:17.025728    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:17.027056    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:17.045407    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:17.510826    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:17.525774    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:17.526632    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:17.545682    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:18.009931    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:18.025975    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:18.026670    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:18.045113    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:18.511234    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:18.524862    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:18.526011    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:18.545415    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:18.856244    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:19.010527    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:19.025169    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:19.025781    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:19.045501    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:19.510470    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:19.525801    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:19.526344    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:19.545876    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:20.011427    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:20.026576    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:20.027715    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:20.045339    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:20.510476    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:20.525894    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:20.526556    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:20.544789    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:21.010478    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:21.026517    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:21.026712    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:21.046557    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:21.355486    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:21.510538    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:21.525130    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:21.525971    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:21.544906    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:22.010276    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:22.026443    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:22.027683    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:22.045372    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:22.510580    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:22.525464    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:22.526797    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:22.544978    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:23.010711    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:23.026806    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:23.027014    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:23.045825    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:23.356069    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:23.510659    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:23.525430    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:23.526399    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:23.545349    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:24.010215    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:24.025064    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:24.026090    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:24.046076    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:24.510883    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:24.525928    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:24.527281    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:24.545588    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:25.010414    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:25.026528    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:25.027131    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:25.046308    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:25.510642    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:25.526422    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:25.527130    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:25.545388    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:25.856293    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:26.010490    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:26.025660    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:26.026411    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:26.045694    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:26.509918    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:26.526201    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:26.526975    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:26.545787    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:27.010439    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:27.025934    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:27.026830    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:27.046014    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:27.510485    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:27.525790    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:27.526739    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:27.544871    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:28.010404    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:28.025938    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:28.027054    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:28.045239    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:28.355793    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:28.510966    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:28.526016    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:28.526943    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:28.545331    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:29.010463    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:29.025999    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:29.026262    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:29.045608    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:29.510310    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:29.525716    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:29.526440    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:29.545556    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:30.011411    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:30.037609    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:30.039215    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:30.046478    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:30.356522    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:30.510484    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:30.525761    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:30.526599    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:30.545627    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:31.010151    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:31.025763    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:31.028131    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:31.045507    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:31.510704    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:31.525681    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:31.526399    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:31.545558    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:32.010997    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:32.026269    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:32.027077    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:32.046364    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:32.510051    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:32.525140    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:32.526324    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:32.545371    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:32.855916    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:33.010328    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:33.025471    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:33.026133    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:33.045887    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:33.510407    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:33.525802    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:33.527114    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:33.545085    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:34.009843    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:34.025666    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:34.026897    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:34.045084    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:34.511059    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:34.525256    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:34.526043    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:34.545183    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:35.011280    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:35.025496    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:35.026360    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:35.046086    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:35.356404    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:35.511806    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:35.526080    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:35.526513    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:35.546020    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:36.010030    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:36.026382    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:36.026979    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:36.045656    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:36.509957    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:36.526020    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:36.526275    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:36.545100    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:37.011140    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:37.027109    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:37.027959    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:37.046024    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:37.510865    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:37.525927    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:37.526468    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:37.545227    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:37.856229    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:38.010588    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:38.025083    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:38.026153    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:38.045398    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:38.510926    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:38.525552    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:38.526411    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:38.545475    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:39.010742    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:39.025692    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:39.026616    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:39.047185    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:39.510892    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:39.526060    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:39.526747    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:39.545103    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:40.015512    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:40.037303    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:40.042268    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:40.123777    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:40.382608    8328 node_ready.go:49] node "addons-561541" has status "Ready":"True"
	I1004 02:49:40.382688    8328 node_ready.go:38] duration metric: took 46.030250639s for node "addons-561541" to be "Ready" ...
	I1004 02:49:40.382713    8328 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:49:40.411056    8328 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l72ll" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:40.564091    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:40.566225    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:40.567527    8328 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1004 02:49:40.567646    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:40.567775    8328 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1004 02:49:40.567795    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:41.026954    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:41.043416    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:41.044137    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:41.057464    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:41.418794    8328 pod_ready.go:93] pod "coredns-7c65d6cfc9-l72ll" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.418821    8328 pod_ready.go:82] duration metric: took 1.007677628s for pod "coredns-7c65d6cfc9-l72ll" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.418873    8328 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.424590    8328 pod_ready.go:93] pod "etcd-addons-561541" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.424666    8328 pod_ready.go:82] duration metric: took 5.776713ms for pod "etcd-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.424685    8328 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.430286    8328 pod_ready.go:93] pod "kube-apiserver-addons-561541" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.430310    8328 pod_ready.go:82] duration metric: took 5.615673ms for pod "kube-apiserver-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.430323    8328 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.435657    8328 pod_ready.go:93] pod "kube-controller-manager-addons-561541" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.435684    8328 pod_ready.go:82] duration metric: took 5.351527ms for pod "kube-controller-manager-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.435707    8328 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hrkf9" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.516513    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:41.526649    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:41.527971    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:41.545114    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:41.557161    8328 pod_ready.go:93] pod "kube-proxy-hrkf9" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.557185    8328 pod_ready.go:82] duration metric: took 121.46867ms for pod "kube-proxy-hrkf9" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.557197    8328 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.957390    8328 pod_ready.go:93] pod "kube-scheduler-addons-561541" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.957463    8328 pod_ready.go:82] duration metric: took 400.257187ms for pod "kube-scheduler-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.957494    8328 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.013216    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:42.030749    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:42.033979    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:42.046361    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:42.516015    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:42.528872    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:42.529311    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:42.548338    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:43.012228    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:43.029252    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:43.031428    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:43.048306    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:43.513374    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:43.528329    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:43.529330    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:43.545957    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:43.963614    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:44.014595    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:44.027763    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:44.028423    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:44.045435    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:44.511992    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:44.526213    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:44.527450    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:44.545988    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:45.012914    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:45.029795    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:45.031520    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:45.047816    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:45.511893    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:45.527616    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:45.528441    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:45.546107    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:45.965001    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:46.012282    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:46.029829    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:46.032437    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:46.046301    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:46.512667    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:46.529958    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:46.532771    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:46.546591    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:47.011226    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:47.027489    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:47.028463    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:47.045696    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:47.513065    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:47.528687    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:47.530187    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:47.545627    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:47.967283    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:48.013463    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:48.029916    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:48.030983    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:48.046552    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:48.513932    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:48.530385    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:48.533138    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:48.546201    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:49.011620    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:49.034232    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:49.035830    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:49.051166    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:49.513049    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:49.529279    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:49.531848    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:49.545973    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:50.023183    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:50.030314    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:50.032018    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:50.047274    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:50.465624    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:50.512486    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:50.528161    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:50.528852    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:50.545585    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:51.012170    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:51.027019    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:51.028228    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:51.046451    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:51.513302    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:51.527383    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:51.528889    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:51.545723    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:52.012776    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:52.027668    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:52.029019    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:52.045548    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:52.512246    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:52.526126    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:52.528222    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:52.545348    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:52.964362    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:53.012235    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:53.028072    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:53.030090    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:53.046046    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:53.514737    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:53.533037    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:53.533981    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:53.545529    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:54.011810    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:54.030174    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:54.031513    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:54.046924    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:54.511634    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:54.527201    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:54.528278    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:54.545874    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:54.964745    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:55.012158    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:55.028052    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:55.029695    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:55.046695    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:55.512977    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:55.526087    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:55.527361    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:55.546041    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:56.011883    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:56.025797    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:56.028531    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:56.045756    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:56.511897    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:56.525664    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:56.526307    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:56.545713    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:56.965963    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:57.012552    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:57.027768    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:57.028541    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:57.048011    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:57.512756    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:57.527895    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:57.528350    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:57.545410    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:58.012358    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:58.031770    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:58.033137    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:58.046211    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:58.512190    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:58.526470    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:58.527733    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:58.545899    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:59.011851    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:59.026790    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:59.028646    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:59.046032    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:59.471076    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:59.535361    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:59.542920    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:59.548064    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:59.556709    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:00.018518    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:00.029291    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:00.030801    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:00.052886    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:00.511829    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:00.527281    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:00.528522    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:00.546340    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:01.011827    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:01.032324    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:01.033615    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:01.051637    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:01.511851    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:01.533988    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:01.535320    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:01.545983    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:01.966238    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:02.012092    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:02.028097    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:02.029824    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:02.049079    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:02.512111    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:02.526621    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:02.526916    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:02.545844    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:03.011709    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:03.026388    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:03.027450    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:03.046663    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:03.521995    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:03.527796    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:03.529478    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:03.552946    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:03.973502    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:04.012581    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:04.027713    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:04.029311    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:04.045755    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:04.513690    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:04.529132    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:04.530128    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:04.612657    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:05.012110    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:05.026873    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:05.027496    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:05.045432    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:05.512066    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:05.526274    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:05.527269    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:05.545644    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:06.012469    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:06.028507    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:06.029899    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:06.046550    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:06.466818    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:06.512451    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:06.530120    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:06.532449    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:06.546602    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:07.012528    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:07.026183    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:07.026744    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:07.046173    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:07.512194    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:07.526261    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:07.527287    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:07.545732    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:08.012432    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:08.029605    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:08.035532    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:08.046656    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:08.512593    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:08.526621    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:08.528155    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:08.547423    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:08.969775    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:09.011729    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:09.027288    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:09.029037    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:09.045616    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:09.512350    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:09.533352    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:09.534730    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:09.612495    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:10.012719    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:10.030173    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:10.030758    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:10.045728    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:10.512121    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:10.526483    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:10.527719    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:10.546223    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:11.011944    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:11.025975    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:11.026460    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:11.046175    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:11.464194    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:11.511674    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:11.526150    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:11.527476    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:11.545623    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:12.016136    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:12.121075    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:12.121426    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:12.123391    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:12.514011    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:12.528538    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:12.529581    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:12.547275    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:13.012068    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:13.027284    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:13.028066    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:13.045943    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:13.464815    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:13.521784    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:13.540043    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:13.542168    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:13.559288    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:14.012932    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:14.031008    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:14.032956    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:14.046149    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:14.512308    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:14.529727    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:14.531132    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:14.545587    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:15.016505    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:15.043212    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:15.047887    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:15.050602    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:15.469154    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:15.512533    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:15.527081    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:15.529221    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:15.555002    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:16.012128    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:16.029375    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:16.030006    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:16.048582    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:16.512096    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:16.527515    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:16.529098    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:16.546056    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:17.013505    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:17.028133    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:17.029488    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:17.045705    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:17.512448    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:17.525985    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:17.526559    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:17.545164    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:17.963071    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:18.011935    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:18.027216    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:18.028014    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:18.045715    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:18.512198    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:18.527621    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:18.528388    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:18.546035    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:19.012538    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:19.027138    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:19.029107    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:19.049247    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:19.515245    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:19.528243    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:19.528933    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:19.612128    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:19.974572    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:20.014185    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:20.031069    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:20.034181    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:20.113988    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:20.512045    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:20.525634    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:20.526324    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:20.545780    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:21.011316    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:21.038255    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:21.039487    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:21.046681    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:21.512774    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:21.526238    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:21.527331    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:21.611218    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:22.012871    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:22.026396    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:22.026993    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:22.045072    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:22.465059    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:22.512249    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:22.527076    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:22.529809    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:22.546302    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:23.013252    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:23.029859    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:23.031069    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:23.050464    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:23.512727    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:23.527887    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:23.528493    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:23.545794    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:24.013248    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:24.035854    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:24.037042    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:24.050520    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:24.511961    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:24.525382    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:24.526438    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:24.545577    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:24.964317    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:25.011703    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:25.026649    8328 kapi.go:107] duration metric: took 1m24.505099943s to wait for kubernetes.io/minikube-addons=registry ...
	I1004 02:50:25.030573    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:25.046129    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:25.512084    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:25.526441    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:25.545818    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:26.011391    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:26.026758    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:26.046036    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:26.514027    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:26.527482    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:26.545867    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:26.965224    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:27.021182    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:27.029360    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:27.048961    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:27.519958    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:27.531205    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:27.555003    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:28.013433    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:28.027350    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:28.046215    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:28.513245    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:28.526668    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:28.546119    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:28.966977    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:29.012357    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:29.027809    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:29.047183    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:29.512235    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:29.526495    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:29.545755    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:30.016445    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:30.048081    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:30.049866    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:30.512919    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:30.527214    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:30.548261    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:31.012402    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:31.026801    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:31.046201    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:31.468779    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:31.511890    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:31.528259    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:31.545648    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:32.012953    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:32.025968    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:32.045260    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:32.514676    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:32.526333    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:32.615633    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:33.012108    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:33.111274    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:33.112761    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:33.511802    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:33.526351    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:33.545326    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:33.964291    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:34.012245    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:34.025871    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:34.045851    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:34.512665    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:34.526198    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:34.545566    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:35.012501    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:35.033436    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:35.046398    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:35.511832    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:35.525852    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:35.544971    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:36.018851    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:36.028975    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:36.050153    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:36.466267    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:36.514624    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:36.526158    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:36.544888    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:37.012270    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:37.026672    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:37.047216    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:37.512992    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:37.526023    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:37.547093    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:38.012401    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:38.027677    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:38.046498    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:38.471627    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:38.513963    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:38.526523    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:38.545758    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:39.018575    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:39.025757    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:39.046753    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:39.513344    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:39.526005    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:39.545749    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:40.031853    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:40.032073    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:40.045855    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:40.512528    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:40.527260    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:40.545443    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:40.963697    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:41.021267    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:41.026197    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:41.046550    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:41.511470    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:41.527030    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:41.546093    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:42.011538    8328 kapi.go:107] duration metric: took 1m41.004857372s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1004 02:50:42.026490    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:42.045421    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:42.526841    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:42.546135    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:42.963754    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:43.025960    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:43.051253    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:43.525524    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:43.545635    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:44.027046    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:44.045167    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:44.526087    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:44.544968    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:44.964098    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:45.038146    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:45.046479    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:45.525956    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:45.545462    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:46.026075    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:46.045133    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:46.526163    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:46.545077    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:46.964847    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:47.028073    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:47.045660    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:47.526944    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:47.547135    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:48.029196    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:48.046145    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:48.527796    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:48.547072    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:49.031299    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:49.127481    8328 kapi.go:107] duration metric: took 1m43.585550454s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1004 02:50:49.129332    8328 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-561541 cluster.
	I1004 02:50:49.131243    8328 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1004 02:50:49.132780    8328 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1004 02:50:49.464079    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:49.530083    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:50.032164    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:50.485951    8328 pod_ready.go:93] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"True"
	I1004 02:50:50.485978    8328 pod_ready.go:82] duration metric: took 1m8.528462024s for pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace to be "Ready" ...
	I1004 02:50:50.485990    8328 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5nsmh" in "kube-system" namespace to be "Ready" ...
	I1004 02:50:50.495203    8328 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-5nsmh" in "kube-system" namespace has status "Ready":"True"
	I1004 02:50:50.495230    8328 pod_ready.go:82] duration metric: took 9.231804ms for pod "nvidia-device-plugin-daemonset-5nsmh" in "kube-system" namespace to be "Ready" ...
	I1004 02:50:50.495254    8328 pod_ready.go:39] duration metric: took 1m10.112497833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:50:50.495278    8328 api_server.go:52] waiting for apiserver process to appear ...
	I1004 02:50:50.495313    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 02:50:50.495380    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 02:50:50.545764    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:50.616884    8328 cri.go:89] found id: "94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4"
	I1004 02:50:50.616908    8328 cri.go:89] found id: ""
	I1004 02:50:50.616915    8328 logs.go:282] 1 containers: [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4]
	I1004 02:50:50.616976    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:50.639179    8328 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 02:50:50.639255    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 02:50:50.977637    8328 cri.go:89] found id: "ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab"
	I1004 02:50:50.977662    8328 cri.go:89] found id: ""
	I1004 02:50:50.977670    8328 logs.go:282] 1 containers: [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab]
	I1004 02:50:50.977732    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:50.981391    8328 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 02:50:50.981464    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 02:50:51.029735    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:51.121360    8328 cri.go:89] found id: "18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575"
	I1004 02:50:51.121387    8328 cri.go:89] found id: ""
	I1004 02:50:51.121395    8328 logs.go:282] 1 containers: [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575]
	I1004 02:50:51.121450    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:51.128678    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 02:50:51.128751    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 02:50:51.367292    8328 cri.go:89] found id: "170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab"
	I1004 02:50:51.367311    8328 cri.go:89] found id: ""
	I1004 02:50:51.367318    8328 logs.go:282] 1 containers: [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab]
	I1004 02:50:51.367371    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:51.383474    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 02:50:51.383547    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 02:50:51.470791    8328 cri.go:89] found id: "c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f"
	I1004 02:50:51.470813    8328 cri.go:89] found id: ""
	I1004 02:50:51.470821    8328 logs.go:282] 1 containers: [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f]
	I1004 02:50:51.470874    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:51.474792    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 02:50:51.474876    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 02:50:51.526658    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:51.590749    8328 cri.go:89] found id: "6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10"
	I1004 02:50:51.590772    8328 cri.go:89] found id: ""
	I1004 02:50:51.590781    8328 logs.go:282] 1 containers: [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10]
	I1004 02:50:51.590834    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:51.605782    8328 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 02:50:51.605856    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 02:50:51.709376    8328 cri.go:89] found id: "11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e"
	I1004 02:50:51.709402    8328 cri.go:89] found id: ""
	I1004 02:50:51.709410    8328 logs.go:282] 1 containers: [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e]
	I1004 02:50:51.709465    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:51.714751    8328 logs.go:123] Gathering logs for describe nodes ...
	I1004 02:50:51.714777    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 02:50:52.040764    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:52.057444    8328 logs.go:123] Gathering logs for kube-proxy [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f] ...
	I1004 02:50:52.057476    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f"
	I1004 02:50:52.229777    8328 logs.go:123] Gathering logs for kindnet [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e] ...
	I1004 02:50:52.229805    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e"
	I1004 02:50:52.354308    8328 logs.go:123] Gathering logs for CRI-O ...
	I1004 02:50:52.354336    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 02:50:52.472238    8328 logs.go:123] Gathering logs for kubelet ...
	I1004 02:50:52.472274    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 02:50:52.538540    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1004 02:50:52.556669    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: W1004 02:48:57.726567    1504 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-561541' and this object
	W1004 02:50:52.556913    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: E1004 02:48:57.726640    1504 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:50:52.557090    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: W1004 02:48:57.731935    1504 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-561541' and this object
	W1004 02:50:52.557413    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: E1004 02:48:57.731984    1504 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:50:52.566276    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077336    1504 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-561541" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:50:52.566493    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077383    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:50:52.566678    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077463    1504 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:50:52.566900    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077498    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:50:52.567080    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.114008    1504 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-561541' and this object
	W1004 02:50:52.567309    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.114055    1504 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	I1004 02:50:52.606705    8328 logs.go:123] Gathering logs for dmesg ...
	I1004 02:50:52.606739    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 02:50:52.637501    8328 logs.go:123] Gathering logs for kube-apiserver [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4] ...
	I1004 02:50:52.637526    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4"
	I1004 02:50:52.762777    8328 logs.go:123] Gathering logs for etcd [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab] ...
	I1004 02:50:52.762857    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab"
	I1004 02:50:52.850377    8328 logs.go:123] Gathering logs for coredns [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575] ...
	I1004 02:50:52.850459    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575"
	I1004 02:50:52.903295    8328 logs.go:123] Gathering logs for kube-scheduler [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab] ...
	I1004 02:50:52.903384    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab"
	I1004 02:50:52.981682    8328 logs.go:123] Gathering logs for kube-controller-manager [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10] ...
	I1004 02:50:52.981758    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10"
	I1004 02:50:53.026628    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:53.092318    8328 logs.go:123] Gathering logs for container status ...
	I1004 02:50:53.092354    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 02:50:53.195358    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:50:53.195385    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1004 02:50:53.195435    8328 out.go:270] X Problems detected in kubelet:
	W1004 02:50:53.195452    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077383    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:50:53.195463    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077463    1504 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:50:53.195474    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077498    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:50:53.195483    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.114008    1504 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-561541' and this object
	W1004 02:50:53.195489    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.114055    1504 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	I1004 02:50:53.195495    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:50:53.195501    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:50:53.526044    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:54.027034    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:54.527896    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:55.027090    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:55.526608    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:56.031668    8328 kapi.go:107] duration metric: took 1m55.510137071s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1004 02:50:56.033861    8328 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, logviewer, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1004 02:50:56.035371    8328 addons.go:510] duration metric: took 2m2.34435321s for enable addons: enabled=[default-storageclass storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns metrics-server logviewer inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1004 02:51:03.196777    8328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 02:51:03.210660    8328 api_server.go:72] duration metric: took 2m9.520011882s to wait for apiserver process to appear ...
	I1004 02:51:03.210687    8328 api_server.go:88] waiting for apiserver healthz status ...
	I1004 02:51:03.210721    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 02:51:03.210783    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 02:51:03.250166    8328 cri.go:89] found id: "94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4"
	I1004 02:51:03.250193    8328 cri.go:89] found id: ""
	I1004 02:51:03.250201    8328 logs.go:282] 1 containers: [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4]
	I1004 02:51:03.250255    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.253725    8328 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 02:51:03.253797    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 02:51:03.293934    8328 cri.go:89] found id: "ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab"
	I1004 02:51:03.293956    8328 cri.go:89] found id: ""
	I1004 02:51:03.293964    8328 logs.go:282] 1 containers: [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab]
	I1004 02:51:03.294023    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.297421    8328 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 02:51:03.297493    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 02:51:03.335321    8328 cri.go:89] found id: "18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575"
	I1004 02:51:03.335342    8328 cri.go:89] found id: ""
	I1004 02:51:03.335349    8328 logs.go:282] 1 containers: [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575]
	I1004 02:51:03.335410    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.338795    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 02:51:03.338873    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 02:51:03.379250    8328 cri.go:89] found id: "170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab"
	I1004 02:51:03.379273    8328 cri.go:89] found id: ""
	I1004 02:51:03.379282    8328 logs.go:282] 1 containers: [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab]
	I1004 02:51:03.379336    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.382822    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 02:51:03.382894    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 02:51:03.421723    8328 cri.go:89] found id: "c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f"
	I1004 02:51:03.421748    8328 cri.go:89] found id: ""
	I1004 02:51:03.421756    8328 logs.go:282] 1 containers: [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f]
	I1004 02:51:03.421812    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.425066    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 02:51:03.425138    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 02:51:03.462203    8328 cri.go:89] found id: "6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10"
	I1004 02:51:03.462236    8328 cri.go:89] found id: ""
	I1004 02:51:03.462244    8328 logs.go:282] 1 containers: [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10]
	I1004 02:51:03.462300    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.465754    8328 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 02:51:03.465825    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 02:51:03.504988    8328 cri.go:89] found id: "11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e"
	I1004 02:51:03.505011    8328 cri.go:89] found id: ""
	I1004 02:51:03.505019    8328 logs.go:282] 1 containers: [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e]
	I1004 02:51:03.505076    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.508568    8328 logs.go:123] Gathering logs for kube-scheduler [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab] ...
	I1004 02:51:03.508602    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab"
	I1004 02:51:03.552151    8328 logs.go:123] Gathering logs for kube-controller-manager [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10] ...
	I1004 02:51:03.552181    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10"
	I1004 02:51:03.624643    8328 logs.go:123] Gathering logs for container status ...
	I1004 02:51:03.624679    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 02:51:03.685963    8328 logs.go:123] Gathering logs for kubelet ...
	I1004 02:51:03.685990    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1004 02:51:03.747485    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: W1004 02:48:57.726567    1504 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-561541' and this object
	W1004 02:51:03.747726    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: E1004 02:48:57.726640    1504 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:03.747905    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: W1004 02:48:57.731935    1504 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-561541' and this object
	W1004 02:51:03.748120    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: E1004 02:48:57.731984    1504 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:03.756652    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077336    1504 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-561541" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:51:03.756858    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077383    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:03.757039    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077463    1504 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:51:03.757266    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077498    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:03.757445    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.114008    1504 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-561541' and this object
	W1004 02:51:03.757667    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.114055    1504 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	I1004 02:51:03.795796    8328 logs.go:123] Gathering logs for etcd [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab] ...
	I1004 02:51:03.795817    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab"
	I1004 02:51:03.856192    8328 logs.go:123] Gathering logs for coredns [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575] ...
	I1004 02:51:03.856228    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575"
	I1004 02:51:03.908105    8328 logs.go:123] Gathering logs for kube-proxy [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f] ...
	I1004 02:51:03.908142    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f"
	I1004 02:51:03.952594    8328 logs.go:123] Gathering logs for kindnet [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e] ...
	I1004 02:51:03.952621    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e"
	I1004 02:51:03.994740    8328 logs.go:123] Gathering logs for CRI-O ...
	I1004 02:51:03.994767    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 02:51:04.088169    8328 logs.go:123] Gathering logs for dmesg ...
	I1004 02:51:04.088207    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 02:51:04.101739    8328 logs.go:123] Gathering logs for describe nodes ...
	I1004 02:51:04.101767    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 02:51:04.239828    8328 logs.go:123] Gathering logs for kube-apiserver [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4] ...
	I1004 02:51:04.239865    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4"
	I1004 02:51:04.292880    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:51:04.292907    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1004 02:51:04.292987    8328 out.go:270] X Problems detected in kubelet:
	W1004 02:51:04.293750    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077383    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:04.293773    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077463    1504 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:51:04.293781    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077498    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:04.293788    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.114008    1504 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-561541' and this object
	W1004 02:51:04.293796    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.114055    1504 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	I1004 02:51:04.293809    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:51:04.293825    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:51:14.295472    8328 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 02:51:14.302959    8328 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1004 02:51:14.304692    8328 api_server.go:141] control plane version: v1.31.1
	I1004 02:51:14.304717    8328 api_server.go:131] duration metric: took 11.094022067s to wait for apiserver health ...
	I1004 02:51:14.304735    8328 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 02:51:14.304758    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 02:51:14.304828    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 02:51:14.357990    8328 cri.go:89] found id: "94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4"
	I1004 02:51:14.358021    8328 cri.go:89] found id: ""
	I1004 02:51:14.358029    8328 logs.go:282] 1 containers: [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4]
	I1004 02:51:14.358083    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.362819    8328 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 02:51:14.362892    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 02:51:14.416154    8328 cri.go:89] found id: "ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab"
	I1004 02:51:14.416179    8328 cri.go:89] found id: ""
	I1004 02:51:14.416188    8328 logs.go:282] 1 containers: [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab]
	I1004 02:51:14.416240    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.419531    8328 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 02:51:14.419601    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 02:51:14.465469    8328 cri.go:89] found id: "18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575"
	I1004 02:51:14.465492    8328 cri.go:89] found id: ""
	I1004 02:51:14.465500    8328 logs.go:282] 1 containers: [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575]
	I1004 02:51:14.465562    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.469176    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 02:51:14.469271    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 02:51:14.506010    8328 cri.go:89] found id: "170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab"
	I1004 02:51:14.506030    8328 cri.go:89] found id: ""
	I1004 02:51:14.506037    8328 logs.go:282] 1 containers: [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab]
	I1004 02:51:14.506095    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.509521    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 02:51:14.509587    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 02:51:14.545799    8328 cri.go:89] found id: "c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f"
	I1004 02:51:14.545821    8328 cri.go:89] found id: ""
	I1004 02:51:14.545829    8328 logs.go:282] 1 containers: [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f]
	I1004 02:51:14.545883    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.549163    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 02:51:14.549285    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 02:51:14.586324    8328 cri.go:89] found id: "6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10"
	I1004 02:51:14.586391    8328 cri.go:89] found id: ""
	I1004 02:51:14.586407    8328 logs.go:282] 1 containers: [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10]
	I1004 02:51:14.586476    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.589894    8328 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 02:51:14.589988    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 02:51:14.627138    8328 cri.go:89] found id: "11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e"
	I1004 02:51:14.627161    8328 cri.go:89] found id: ""
	I1004 02:51:14.627168    8328 logs.go:282] 1 containers: [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e]
	I1004 02:51:14.627241    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.630613    8328 logs.go:123] Gathering logs for etcd [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab] ...
	I1004 02:51:14.630638    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab"
	I1004 02:51:14.702143    8328 logs.go:123] Gathering logs for coredns [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575] ...
	I1004 02:51:14.702175    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575"
	I1004 02:51:14.741646    8328 logs.go:123] Gathering logs for kube-proxy [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f] ...
	I1004 02:51:14.741673    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f"
	I1004 02:51:14.779592    8328 logs.go:123] Gathering logs for kube-controller-manager [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10] ...
	I1004 02:51:14.779623    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10"
	I1004 02:51:14.865327    8328 logs.go:123] Gathering logs for kindnet [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e] ...
	I1004 02:51:14.865407    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e"
	I1004 02:51:14.910177    8328 logs.go:123] Gathering logs for dmesg ...
	I1004 02:51:14.910210    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 02:51:14.929759    8328 logs.go:123] Gathering logs for describe nodes ...
	I1004 02:51:14.929789    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 02:51:15.089176    8328 logs.go:123] Gathering logs for kube-apiserver [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4] ...
	I1004 02:51:15.090422    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4"
	I1004 02:51:15.179936    8328 logs.go:123] Gathering logs for container status ...
	I1004 02:51:15.179970    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 02:51:15.245521    8328 logs.go:123] Gathering logs for kubelet ...
	I1004 02:51:15.245551    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1004 02:51:15.313004    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: W1004 02:48:57.726567    1504 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.313287    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: E1004 02:48:57.726640    1504 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:15.313486    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: W1004 02:48:57.731935    1504 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.313707    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: E1004 02:48:57.731984    1504 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:15.322283    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077336    1504 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-561541" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.322498    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077383    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:15.322680    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077463    1504 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.322903    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077498    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:15.323085    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.114008    1504 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.323305    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.114055    1504 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	I1004 02:51:15.363628    8328 logs.go:123] Gathering logs for kube-scheduler [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab] ...
	I1004 02:51:15.363655    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab"
	I1004 02:51:15.407886    8328 logs.go:123] Gathering logs for CRI-O ...
	I1004 02:51:15.407920    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 02:51:15.501398    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:51:15.501427    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1004 02:51:15.501501    8328 out.go:270] X Problems detected in kubelet:
	W1004 02:51:15.501516    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077383    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:15.501533    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077463    1504 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.501556    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077498    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:15.501571    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.114008    1504 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.501596    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.114055    1504 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	I1004 02:51:15.501603    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:51:15.501616    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:51:25.513781    8328 system_pods.go:59] 19 kube-system pods found
	I1004 02:51:25.513822    8328 system_pods.go:61] "coredns-7c65d6cfc9-l72ll" [c7bdc99e-d5d1-449c-968d-9cbcbe5d3883] Running
	I1004 02:51:25.513829    8328 system_pods.go:61] "csi-hostpath-attacher-0" [5d9da269-be6d-4ac1-bdfb-dc06753ed143] Running
	I1004 02:51:25.513835    8328 system_pods.go:61] "csi-hostpath-resizer-0" [a74cf24a-1485-4e39-8fe2-61eb95046f16] Running
	I1004 02:51:25.513840    8328 system_pods.go:61] "csi-hostpathplugin-2kf2t" [d2bbfca4-9688-425d-bdcd-97cc2c84c619] Running
	I1004 02:51:25.513845    8328 system_pods.go:61] "etcd-addons-561541" [a0edb3d8-1cb5-406e-aee7-b7a3163a557d] Running
	I1004 02:51:25.513849    8328 system_pods.go:61] "kindnet-7tqxs" [23685d4c-c7f4-4c2a-bbd6-ab4f572a2a2a] Running
	I1004 02:51:25.513854    8328 system_pods.go:61] "kube-apiserver-addons-561541" [97914951-e72b-45ba-b901-1142d3c9b967] Running
	I1004 02:51:25.513858    8328 system_pods.go:61] "kube-controller-manager-addons-561541" [db37bb5d-abd2-4e6c-a469-58176fe06cb9] Running
	I1004 02:51:25.513868    8328 system_pods.go:61] "kube-ingress-dns-minikube" [40574e9c-4112-4693-9361-ac3a76c1f048] Running
	I1004 02:51:25.513872    8328 system_pods.go:61] "kube-proxy-hrkf9" [6c693613-dcb1-4111-87d2-936d6b82b963] Running
	I1004 02:51:25.513879    8328 system_pods.go:61] "kube-scheduler-addons-561541" [48973e95-fe2f-4074-a1bf-7afb482c6609] Running
	I1004 02:51:25.513883    8328 system_pods.go:61] "logviewer-7c79c8bcc9-2b554" [75a7f403-12b6-4f98-b0af-8bf7c3aa0ab1] Running
	I1004 02:51:25.513894    8328 system_pods.go:61] "metrics-server-84c5f94fbc-4hhst" [7577c62c-151a-4a09-91f6-abd270367e65] Running
	I1004 02:51:25.513900    8328 system_pods.go:61] "nvidia-device-plugin-daemonset-5nsmh" [417c82a7-a3be-4373-b14a-9d52e4aaa1d2] Running
	I1004 02:51:25.513905    8328 system_pods.go:61] "registry-66c9cd494c-lc5j7" [d1434ec1-9246-4eec-97cd-0ae38734e96e] Running
	I1004 02:51:25.513915    8328 system_pods.go:61] "registry-proxy-2kl22" [ee49d77e-84c1-4b75-b458-f901291a1eb8] Running
	I1004 02:51:25.513919    8328 system_pods.go:61] "snapshot-controller-56fcc65765-wwg4w" [706139ae-1a4c-44b6-b2bd-14c48c7d9286] Running
	I1004 02:51:25.513923    8328 system_pods.go:61] "snapshot-controller-56fcc65765-x9vhd" [3b89f1fe-763d-49f4-b65e-d0536fdd2293] Running
	I1004 02:51:25.513929    8328 system_pods.go:61] "storage-provisioner" [15c3949d-928d-4cd9-9c7f-828971d88260] Running
	I1004 02:51:25.513936    8328 system_pods.go:74] duration metric: took 11.209194297s to wait for pod list to return data ...
	I1004 02:51:25.513946    8328 default_sa.go:34] waiting for default service account to be created ...
	I1004 02:51:25.516771    8328 default_sa.go:45] found service account: "default"
	I1004 02:51:25.516795    8328 default_sa.go:55] duration metric: took 2.84218ms for default service account to be created ...
	I1004 02:51:25.516804    8328 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 02:51:25.527170    8328 system_pods.go:86] 19 kube-system pods found
	I1004 02:51:25.527249    8328 system_pods.go:89] "coredns-7c65d6cfc9-l72ll" [c7bdc99e-d5d1-449c-968d-9cbcbe5d3883] Running
	I1004 02:51:25.527268    8328 system_pods.go:89] "csi-hostpath-attacher-0" [5d9da269-be6d-4ac1-bdfb-dc06753ed143] Running
	I1004 02:51:25.527274    8328 system_pods.go:89] "csi-hostpath-resizer-0" [a74cf24a-1485-4e39-8fe2-61eb95046f16] Running
	I1004 02:51:25.527279    8328 system_pods.go:89] "csi-hostpathplugin-2kf2t" [d2bbfca4-9688-425d-bdcd-97cc2c84c619] Running
	I1004 02:51:25.527284    8328 system_pods.go:89] "etcd-addons-561541" [a0edb3d8-1cb5-406e-aee7-b7a3163a557d] Running
	I1004 02:51:25.527289    8328 system_pods.go:89] "kindnet-7tqxs" [23685d4c-c7f4-4c2a-bbd6-ab4f572a2a2a] Running
	I1004 02:51:25.527293    8328 system_pods.go:89] "kube-apiserver-addons-561541" [97914951-e72b-45ba-b901-1142d3c9b967] Running
	I1004 02:51:25.527298    8328 system_pods.go:89] "kube-controller-manager-addons-561541" [db37bb5d-abd2-4e6c-a469-58176fe06cb9] Running
	I1004 02:51:25.527324    8328 system_pods.go:89] "kube-ingress-dns-minikube" [40574e9c-4112-4693-9361-ac3a76c1f048] Running
	I1004 02:51:25.527335    8328 system_pods.go:89] "kube-proxy-hrkf9" [6c693613-dcb1-4111-87d2-936d6b82b963] Running
	I1004 02:51:25.527340    8328 system_pods.go:89] "kube-scheduler-addons-561541" [48973e95-fe2f-4074-a1bf-7afb482c6609] Running
	I1004 02:51:25.527344    8328 system_pods.go:89] "logviewer-7c79c8bcc9-2b554" [75a7f403-12b6-4f98-b0af-8bf7c3aa0ab1] Running
	I1004 02:51:25.527361    8328 system_pods.go:89] "metrics-server-84c5f94fbc-4hhst" [7577c62c-151a-4a09-91f6-abd270367e65] Running
	I1004 02:51:25.527372    8328 system_pods.go:89] "nvidia-device-plugin-daemonset-5nsmh" [417c82a7-a3be-4373-b14a-9d52e4aaa1d2] Running
	I1004 02:51:25.527376    8328 system_pods.go:89] "registry-66c9cd494c-lc5j7" [d1434ec1-9246-4eec-97cd-0ae38734e96e] Running
	I1004 02:51:25.527380    8328 system_pods.go:89] "registry-proxy-2kl22" [ee49d77e-84c1-4b75-b458-f901291a1eb8] Running
	I1004 02:51:25.527390    8328 system_pods.go:89] "snapshot-controller-56fcc65765-wwg4w" [706139ae-1a4c-44b6-b2bd-14c48c7d9286] Running
	I1004 02:51:25.527396    8328 system_pods.go:89] "snapshot-controller-56fcc65765-x9vhd" [3b89f1fe-763d-49f4-b65e-d0536fdd2293] Running
	I1004 02:51:25.527400    8328 system_pods.go:89] "storage-provisioner" [15c3949d-928d-4cd9-9c7f-828971d88260] Running
	I1004 02:51:25.527411    8328 system_pods.go:126] duration metric: took 10.600827ms to wait for k8s-apps to be running ...
	I1004 02:51:25.527423    8328 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 02:51:25.527509    8328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:51:25.539403    8328 system_svc.go:56] duration metric: took 11.970586ms WaitForService to wait for kubelet
	I1004 02:51:25.539435    8328 kubeadm.go:582] duration metric: took 2m31.848792494s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:51:25.539454    8328 node_conditions.go:102] verifying NodePressure condition ...
	I1004 02:51:25.543001    8328 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1004 02:51:25.543041    8328 node_conditions.go:123] node cpu capacity is 2
	I1004 02:51:25.543058    8328 node_conditions.go:105] duration metric: took 3.597644ms to run NodePressure ...
	I1004 02:51:25.543071    8328 start.go:241] waiting for startup goroutines ...
	I1004 02:51:25.543079    8328 start.go:246] waiting for cluster config update ...
	I1004 02:51:25.543097    8328 start.go:255] writing updated cluster config ...
	I1004 02:51:25.543864    8328 ssh_runner.go:195] Run: rm -f paused
	I1004 02:51:25.854517    8328 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 02:51:25.857954    8328 out.go:177] * Done! kubectl is now configured to use "addons-561541" cluster and "default" namespace by default
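
The start log above ends with the readiness checks minikube runs before declaring the cluster usable: the default service account, the kube-system pods, the kubelet service, and the node pressure conditions. A rough sketch of equivalent manual checks against this profile, using only names that appear in the log (everything else is plain kubectl/minikube usage, not the test's own code):

  kubectl --context addons-561541 get serviceaccount default -n default
  kubectl --context addons-561541 get pods -n kube-system            # should all be Running
  out/minikube-linux-arm64 ssh -p addons-561541 -- sudo systemctl is-active kubelet
  kubectl --context addons-561541 describe node addons-561541 | grep -A6 'Conditions:'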
	
	
	==> CRI-O <==
	Oct 04 03:02:28 addons-561541 crio[964]: time="2024-10-04 03:02:28.086196884Z" level=info msg="Started container" PID=14158 containerID=62a8b0a39d4c5cc649c9b58b6a08fc405512040f7a9afddf916abb04fdb94f43 description=default/busybox/busybox id=4120605e-8380-4c31-ace4-519f194cf129 name=/runtime.v1.RuntimeService/StartContainer sandboxID=03e8b76cf2ca654438a463dc0430b61f49fb38cee39d5f4f8913ec7583519685
	Oct 04 03:03:41 addons-561541 crio[964]: time="2024-10-04 03:03:41.383143119Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-n76qr/POD" id=34e139cf-5f54-4ef5-953b-1f13fb96c948 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 03:03:41 addons-561541 crio[964]: time="2024-10-04 03:03:41.383208357Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 04 03:03:41 addons-561541 crio[964]: time="2024-10-04 03:03:41.406014081Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-n76qr Namespace:default ID:70bcb1cfc17cbc9ddd96d2836c725b11a08cd605f14ddefd062a7d6cba7f5380 UID:b8f88554-e6b4-49c7-a48f-9a5770d29df2 NetNS:/var/run/netns/1dfc30ee-1386-4b0b-af0f-b20f6e9671b2 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 04 03:03:41 addons-561541 crio[964]: time="2024-10-04 03:03:41.406059569Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-n76qr to CNI network \"kindnet\" (type=ptp)"
	Oct 04 03:03:41 addons-561541 crio[964]: time="2024-10-04 03:03:41.416752335Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-n76qr Namespace:default ID:70bcb1cfc17cbc9ddd96d2836c725b11a08cd605f14ddefd062a7d6cba7f5380 UID:b8f88554-e6b4-49c7-a48f-9a5770d29df2 NetNS:/var/run/netns/1dfc30ee-1386-4b0b-af0f-b20f6e9671b2 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 04 03:03:41 addons-561541 crio[964]: time="2024-10-04 03:03:41.416958164Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-n76qr for CNI network kindnet (type=ptp)"
	Oct 04 03:03:41 addons-561541 crio[964]: time="2024-10-04 03:03:41.419605304Z" level=info msg="Ran pod sandbox 70bcb1cfc17cbc9ddd96d2836c725b11a08cd605f14ddefd062a7d6cba7f5380 with infra container: default/hello-world-app-55bf9c44b4-n76qr/POD" id=34e139cf-5f54-4ef5-953b-1f13fb96c948 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 03:03:41 addons-561541 crio[964]: time="2024-10-04 03:03:41.420879703Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e17299d2-1a36-42db-9dbb-02b78ee50e04 name=/runtime.v1.ImageService/ImageStatus
	Oct 04 03:03:41 addons-561541 crio[964]: time="2024-10-04 03:03:41.421086992Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e17299d2-1a36-42db-9dbb-02b78ee50e04 name=/runtime.v1.ImageService/ImageStatus
	Oct 04 03:03:41 addons-561541 crio[964]: time="2024-10-04 03:03:41.422443095Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=09bac4a0-32fb-4901-8485-21b2eb3af9d9 name=/runtime.v1.ImageService/PullImage
	Oct 04 03:03:41 addons-561541 crio[964]: time="2024-10-04 03:03:41.425911066Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 04 03:03:41 addons-561541 crio[964]: time="2024-10-04 03:03:41.731586977Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 04 03:03:42 addons-561541 crio[964]: time="2024-10-04 03:03:42.570896965Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=09bac4a0-32fb-4901-8485-21b2eb3af9d9 name=/runtime.v1.ImageService/PullImage
	Oct 04 03:03:42 addons-561541 crio[964]: time="2024-10-04 03:03:42.571802576Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1af8911c-b2a6-485d-b107-b2954d040f0e name=/runtime.v1.ImageService/ImageStatus
	Oct 04 03:03:42 addons-561541 crio[964]: time="2024-10-04 03:03:42.572535046Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1af8911c-b2a6-485d-b107-b2954d040f0e name=/runtime.v1.ImageService/ImageStatus
	Oct 04 03:03:42 addons-561541 crio[964]: time="2024-10-04 03:03:42.575644265Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9d2fc1b6-0fb3-459d-a498-f6299c04aa74 name=/runtime.v1.ImageService/ImageStatus
	Oct 04 03:03:42 addons-561541 crio[964]: time="2024-10-04 03:03:42.577990014Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9d2fc1b6-0fb3-459d-a498-f6299c04aa74 name=/runtime.v1.ImageService/ImageStatus
	Oct 04 03:03:42 addons-561541 crio[964]: time="2024-10-04 03:03:42.578965818Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-n76qr/hello-world-app" id=b267a63e-38ea-442c-8136-e48b42b81a85 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 04 03:03:42 addons-561541 crio[964]: time="2024-10-04 03:03:42.579146729Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 04 03:03:42 addons-561541 crio[964]: time="2024-10-04 03:03:42.606923731Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ee9637b95aa52cc088ab06ff735d504a843742e4b269407d542fefdd94c929f7/merged/etc/passwd: no such file or directory"
	Oct 04 03:03:42 addons-561541 crio[964]: time="2024-10-04 03:03:42.607136551Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ee9637b95aa52cc088ab06ff735d504a843742e4b269407d542fefdd94c929f7/merged/etc/group: no such file or directory"
	Oct 04 03:03:42 addons-561541 crio[964]: time="2024-10-04 03:03:42.665056480Z" level=info msg="Created container 84287d5c8ebe4dba57e69def6222f18fe5000292327937bcb6227ebf778d8ac8: default/hello-world-app-55bf9c44b4-n76qr/hello-world-app" id=b267a63e-38ea-442c-8136-e48b42b81a85 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 04 03:03:42 addons-561541 crio[964]: time="2024-10-04 03:03:42.666080513Z" level=info msg="Starting container: 84287d5c8ebe4dba57e69def6222f18fe5000292327937bcb6227ebf778d8ac8" id=409ff465-ce51-4686-b4a4-72484b738abc name=/runtime.v1.RuntimeService/StartContainer
	Oct 04 03:03:42 addons-561541 crio[964]: time="2024-10-04 03:03:42.689906956Z" level=info msg="Started container" PID=14357 containerID=84287d5c8ebe4dba57e69def6222f18fe5000292327937bcb6227ebf778d8ac8 description=default/hello-world-app-55bf9c44b4-n76qr/hello-world-app id=409ff465-ce51-4686-b4a4-72484b738abc name=/runtime.v1.RuntimeService/StartContainer sandboxID=70bcb1cfc17cbc9ddd96d2836c725b11a08cd605f14ddefd062a7d6cba7f5380
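
The CRI-O entries above trace the full start path for the hello-world-app container: RunPodSandbox, wiring the sandbox into the kindnet CNI network, an ImageStatus check that finds docker.io/kicbase/echo-server:1.0 missing, PullImage, CreateContainer, and finally StartContainer. The same state can be inspected on the node with crictl pointed at the socket from the node annotations (unix:///var/run/crio/crio.sock); a sketch that assumes crictl is available in the minikube node image:

  out/minikube-linux-arm64 ssh -p addons-561541
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images | grep echo-server
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps --name hello-world-app
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs 84287d5c8ebe4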
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	84287d5c8ebe4       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   70bcb1cfc17cb       hello-world-app-55bf9c44b4-n76qr
	62a8b0a39d4c5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          About a minute ago       Running             busybox                   0                   03e8b76cf2ca6       busybox
	d1ae91ba20f39       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago            Running             nginx                     0                   7c2a31b9c5f25       nginx
	c628af59171a6       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             12 minutes ago           Running             controller                0                   5269f98841136       ingress-nginx-controller-bc57996ff-vspxr
	46b497acff809       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             13 minutes ago           Exited              patch                     2                   9b56dfde07b05       ingress-nginx-admission-patch-djgkm
	e514372bb7c61       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   13 minutes ago           Exited              create                    0                   701673e6b651e       ingress-nginx-admission-create-v956v
	e91d3f678a224       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             13 minutes ago           Running             minikube-ingress-dns      0                   0f843618adee2       kube-ingress-dns-minikube
	57663e10f8fb7       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        14 minutes ago           Running             metrics-server            0                   6e671a3f83f2a       metrics-server-84c5f94fbc-4hhst
	0d15cf46332c6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             14 minutes ago           Running             storage-provisioner       0                   f82a1d1b6fc23       storage-provisioner
	18fa390b6a898       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             14 minutes ago           Running             coredns                   0                   b1697c3bbe9ac       coredns-7c65d6cfc9-l72ll
	c090785615f89       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             14 minutes ago           Running             kube-proxy                0                   d61de0f8c591f       kube-proxy-hrkf9
	11c9fccd22a80       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             14 minutes ago           Running             kindnet-cni               0                   2bce0fd8db1f3       kindnet-7tqxs
	170502ec13419       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             14 minutes ago           Running             kube-scheduler            0                   9b67e90846ea3       kube-scheduler-addons-561541
	94872964dd248       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             14 minutes ago           Running             kube-apiserver            0                   5cc52b5d8789f       kube-apiserver-addons-561541
	6ae364e85e983       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             14 minutes ago           Running             kube-controller-manager   0                   5351cdc98634d       kube-controller-manager-addons-561541
	ce90142154888       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             14 minutes ago           Running             etcd                      0                   6c867bd953225       etcd-addons-561541
	
	
	==> coredns [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575] <==
	[INFO] 10.244.0.13:43278 - 44932 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003847946s
	[INFO] 10.244.0.13:43278 - 56016 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.0001599s
	[INFO] 10.244.0.13:43278 - 16388 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000137459s
	[INFO] 10.244.0.13:56754 - 23534 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000106329s
	[INFO] 10.244.0.13:56754 - 23279 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000211255s
	[INFO] 10.244.0.13:42097 - 12259 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000069653s
	[INFO] 10.244.0.13:42097 - 12448 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114937s
	[INFO] 10.244.0.13:41287 - 912 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060069s
	[INFO] 10.244.0.13:41287 - 683 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047646s
	[INFO] 10.244.0.13:45029 - 37289 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001270527s
	[INFO] 10.244.0.13:45029 - 37462 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001258113s
	[INFO] 10.244.0.13:54674 - 39690 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080442s
	[INFO] 10.244.0.13:54674 - 40093 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000139396s
	[INFO] 10.244.0.21:41937 - 41761 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216506s
	[INFO] 10.244.0.21:36324 - 61840 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000178665s
	[INFO] 10.244.0.21:46716 - 37281 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000169262s
	[INFO] 10.244.0.21:57938 - 14840 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017672s
	[INFO] 10.244.0.21:40063 - 19727 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000142759s
	[INFO] 10.244.0.21:36020 - 14030 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000140905s
	[INFO] 10.244.0.21:45692 - 36105 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002917715s
	[INFO] 10.244.0.21:51495 - 46171 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002332335s
	[INFO] 10.244.0.21:44321 - 49564 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000941751s
	[INFO] 10.244.0.21:36428 - 61664 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002463156s
	[INFO] 10.244.0.23:53685 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000220137s
	[INFO] 10.244.0.23:52286 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137916s
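
The coredns queries above show the pod's DNS search path being walked: lookups for registry.kube-system.svc.cluster.local are first expanded with the search suffixes (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) and answered NXDOMAIN, before the fully-qualified name resolves NOERROR. A quick way to reproduce this from inside the cluster is a throwaway pod; a sketch (the dns-check pod name and the busybox:1.28 image are assumptions, not part of this test):

  kubectl --context addons-561541 run dns-check --image=busybox:1.28 --rm -it --restart=Never -- \
    nslookup registry.kube-system.svc.cluster.local.
  # The trailing dot marks the name as fully qualified, so the search suffixes are skipped.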
	
	
	==> describe nodes <==
	Name:               addons-561541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-561541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=addons-561541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T02_48_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-561541
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 02:48:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-561541
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:03:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:02:56 +0000   Fri, 04 Oct 2024 02:48:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:02:56 +0000   Fri, 04 Oct 2024 02:48:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:02:56 +0000   Fri, 04 Oct 2024 02:48:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:02:56 +0000   Fri, 04 Oct 2024 02:49:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-561541
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 59dba9086f574e6484a9ea8720d3047f
	  System UUID:                1842af5d-2609-4c57-90a7-654220d497e5
	  Boot ID:                    cc975b9c-d4f7-443e-a63b-68cdfd7ad286
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-n76qr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vspxr    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-l72ll                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-addons-561541                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-7tqxs                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-561541                250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-561541       200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-hrkf9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-561541                100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-4hhst             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-561541 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-561541 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-561541 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  14m                kubelet          Node addons-561541 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                kubelet          Node addons-561541 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                kubelet          Node addons-561541 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                node-controller  Node addons-561541 event: Registered Node addons-561541 in Controller
	  Normal   NodeReady                14m                kubelet          Node addons-561541 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 4 02:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015570] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.529270] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.049348] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015318] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.608453] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.834894] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab] <==
	{"level":"warn","ts":"2024-10-04T02:48:57.015509Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.78329ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-561541\" ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2024-10-04T02:48:57.015554Z","caller":"traceutil/trace.go:171","msg":"trace[242393772] range","detail":"{range_begin:/registry/minions/addons-561541; range_end:; response_count:1; response_revision:363; }","duration":"289.851269ms","start":"2024-10-04T02:48:56.725695Z","end":"2024-10-04T02:48:57.015546Z","steps":["trace[242393772] 'agreement among raft nodes before linearized reading'  (duration: 289.737408ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T02:48:57.015805Z","caller":"traceutil/trace.go:171","msg":"trace[116082131] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"266.735571ms","start":"2024-10-04T02:48:56.749061Z","end":"2024-10-04T02:48:57.015796Z","steps":["trace[116082131] 'process raft request'  (duration: 263.823683ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:48:57.035221Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.494624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:48:57.035277Z","caller":"traceutil/trace.go:171","msg":"trace[1322537289] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:363; }","duration":"280.559526ms","start":"2024-10-04T02:48:56.754704Z","end":"2024-10-04T02:48:57.035264Z","steps":["trace[1322537289] 'agreement among raft nodes before linearized reading'  (duration: 280.47014ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:48:57.077564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"415.267408ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:48:57.081904Z","caller":"traceutil/trace.go:171","msg":"trace[1993550185] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:363; }","duration":"419.629538ms","start":"2024-10-04T02:48:56.662258Z","end":"2024-10-04T02:48:57.081888Z","steps":["trace[1993550185] 'agreement among raft nodes before linearized reading'  (duration: 415.227762ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T02:48:57.015417Z","caller":"traceutil/trace.go:171","msg":"trace[599140919] linearizableReadLoop","detail":"{readStateIndex:374; appliedIndex:371; }","duration":"229.568105ms","start":"2024-10-04T02:48:56.785823Z","end":"2024-10-04T02:48:57.015391Z","steps":["trace[599140919] 'read index received'  (duration: 112.512183ms)","trace[599140919] 'applied index is now lower than readState.Index'  (duration: 117.055217ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T02:48:57.082261Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"528.280312ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-561541\" ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2024-10-04T02:48:57.082298Z","caller":"traceutil/trace.go:171","msg":"trace[1246009146] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-561541; range_end:; response_count:1; response_revision:363; }","duration":"528.326318ms","start":"2024-10-04T02:48:56.553962Z","end":"2024-10-04T02:48:57.082289Z","steps":["trace[1246009146] 'agreement among raft nodes before linearized reading'  (duration: 505.355102ms)","trace[1246009146] 'range keys from in-memory index tree'  (duration: 22.883725ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T02:48:57.082323Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:48:56.553943Z","time spent":"528.373719ms","remote":"127.0.0.1:48012","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":7277,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-561541\" "}
	{"level":"warn","ts":"2024-10-04T02:48:57.082617Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:48:56.662237Z","time spent":"419.717225ms","remote":"127.0.0.1:47856","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":0,"response size":29,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 "}
	{"level":"warn","ts":"2024-10-04T02:48:57.059431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"397.095512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:48:57.170877Z","caller":"traceutil/trace.go:171","msg":"trace[671757903] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:363; }","duration":"508.543403ms","start":"2024-10-04T02:48:56.662309Z","end":"2024-10-04T02:48:57.170852Z","steps":["trace[671757903] 'agreement among raft nodes before linearized reading'  (duration: 397.07632ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:48:57.171584Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:48:56.662296Z","time spent":"509.260671ms","remote":"127.0.0.1:48210","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":0,"response size":29,"request content":"key:\"/registry/storageclasses/standard\" "}
	{"level":"warn","ts":"2024-10-04T02:48:57.285863Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.369755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:48:57.287128Z","caller":"traceutil/trace.go:171","msg":"trace[52719835] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:363; }","duration":"179.623388ms","start":"2024-10-04T02:48:57.107488Z","end":"2024-10-04T02:48:57.287111Z","steps":["trace[52719835] 'agreement among raft nodes before linearized reading'  (duration: 50.770164ms)","trace[52719835] 'range keys from in-memory index tree'  (duration: 127.456192ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T02:48:57.285828Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"731.634653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:48:57.288049Z","caller":"traceutil/trace.go:171","msg":"trace[1368450268] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:363; }","duration":"733.864332ms","start":"2024-10-04T02:48:56.554173Z","end":"2024-10-04T02:48:57.288037Z","steps":["trace[1368450268] 'agreement among raft nodes before linearized reading'  (duration: 647.255534ms)","trace[1368450268] 'range keys from in-memory index tree'  (duration: 84.3691ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T02:48:57.288663Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:48:56.554132Z","time spent":"734.516298ms","remote":"127.0.0.1:48030","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts\" limit:1 "}
	{"level":"warn","ts":"2024-10-04T02:48:57.287286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.453443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:48:57.289173Z","caller":"traceutil/trace.go:171","msg":"trace[628025940] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:363; }","duration":"103.346355ms","start":"2024-10-04T02:48:57.185817Z","end":"2024-10-04T02:48:57.289163Z","steps":["trace[628025940] 'range keys from in-memory index tree'  (duration: 101.286553ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T02:58:44.648713Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1550}
	{"level":"info","ts":"2024-10-04T02:58:44.681113Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1550,"took":"31.806076ms","hash":4174400111,"current-db-size-bytes":6389760,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3305472,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-10-04T02:58:44.681164Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4174400111,"revision":1550,"compact-revision":-1}
	
	
	==> kernel <==
	 03:03:43 up 46 min,  0 users,  load average: 0.07, 0.29, 0.32
	Linux addons-561541 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e] <==
	I1004 03:01:39.608988       1 main.go:299] handling current node
	I1004 03:01:49.617889       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:01:49.617922       1 main.go:299] handling current node
	I1004 03:01:59.609195       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:01:59.609246       1 main.go:299] handling current node
	I1004 03:02:09.609305       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:02:09.609444       1 main.go:299] handling current node
	I1004 03:02:19.608973       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:02:19.609006       1 main.go:299] handling current node
	I1004 03:02:29.609148       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:02:29.609305       1 main.go:299] handling current node
	I1004 03:02:39.616482       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:02:39.616516       1 main.go:299] handling current node
	I1004 03:02:49.614573       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:02:49.614606       1 main.go:299] handling current node
	I1004 03:02:59.609675       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:02:59.609709       1 main.go:299] handling current node
	I1004 03:03:09.609375       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:03:09.609504       1 main.go:299] handling current node
	I1004 03:03:19.609573       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:03:19.609613       1 main.go:299] handling current node
	I1004 03:03:29.609215       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:03:29.609350       1 main.go:299] handling current node
	I1004 03:03:39.616078       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:03:39.616115       1 main.go:299] handling current node
	
	
	==> kube-apiserver [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4] <==
	 > logger="UnhandledError"
	E1004 02:50:50.492702       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.7.145:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.7.145:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.7.145:443: connect: connection refused" logger="UnhandledError"
	E1004 02:50:50.499329       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.7.145:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.7.145:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.7.145:443: connect: connection refused" logger="UnhandledError"
	I1004 02:50:50.774461       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 03:00:06.852413       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1004 03:00:23.529727       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:00:23.529800       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:00:23.630409       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:00:23.630472       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:00:23.687414       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:00:23.687464       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:00:23.726639       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:00:23.726685       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:00:23.805982       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:00:23.807304       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1004 03:00:24.729523       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1004 03:00:24.806669       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1004 03:00:24.825919       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1004 03:00:38.457257       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.231.221"}
	E1004 03:00:41.376517       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1004 03:01:15.431320       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1004 03:01:16.468283       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1004 03:01:20.997346       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1004 03:01:21.325261       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.23.145"}
	I1004 03:03:41.328362       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.55.205"}
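
The apiserver log above also carries repeated "v1beta1.metrics.k8s.io failed ... connection refused" errors from earlier in the run, while the metrics-server endpoint at 10.109.7.145:443 was unreachable; that lines up with the TestAddons/parallel/MetricsServer failure. The aggregated API can be checked directly with standard kubectl commands (a sketch, not part of the test):

  kubectl --context addons-561541 get apiservice v1beta1.metrics.k8s.io
  kubectl --context addons-561541 -n kube-system logs deploy/metrics-server
  kubectl --context addons-561541 top nodes   # fails until the APIService reports Available=True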
	
	
	==> kube-controller-manager [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10] <==
	W1004 03:02:34.803532       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:02:34.803663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:02:39.004646       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:02:39.004694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:02:42.441344       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:02:42.441386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:02:44.916507       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:02:44.916876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1004 03:02:56.417884       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-561541"
	W1004 03:03:05.541875       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:03:05.541916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:03:13.889678       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:03:13.889723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:03:23.321028       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:03:23.321074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:03:27.653258       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:03:27.653301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1004 03:03:41.080151       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.094971ms"
	I1004 03:03:41.092295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.085094ms"
	I1004 03:03:41.092389       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.831µs"
	I1004 03:03:41.123064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.735µs"
	W1004 03:03:42.866141       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:03:42.866181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1004 03:03:43.300818       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.737607ms"
	I1004 03:03:43.300893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="32.738µs"
	
	
	==> kube-proxy [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f] <==
	I1004 02:48:59.709398       1 server_linux.go:66] "Using iptables proxy"
	I1004 02:49:00.163183       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1004 02:49:00.165551       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 02:49:00.230608       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1004 02:49:00.230743       1 server_linux.go:169] "Using iptables Proxier"
	I1004 02:49:00.233031       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 02:49:00.233700       1 server.go:483] "Version info" version="v1.31.1"
	I1004 02:49:00.233785       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 02:49:00.235594       1 config.go:199] "Starting service config controller"
	I1004 02:49:00.235705       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 02:49:00.235778       1 config.go:105] "Starting endpoint slice config controller"
	I1004 02:49:00.235813       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 02:49:00.236798       1 config.go:328] "Starting node config controller"
	I1004 02:49:00.236911       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 02:49:00.341187       1 shared_informer.go:320] Caches are synced for node config
	I1004 02:49:00.341275       1 shared_informer.go:320] Caches are synced for service config
	I1004 02:49:00.341313       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab] <==
	W1004 02:48:47.685626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 02:48:47.685638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.685703       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 02:48:47.685714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.685756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 02:48:47.685766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686056       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 02:48:47.686090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 02:48:47.686347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 02:48:47.686484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 02:48:47.686558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 02:48:47.686655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686729       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 02:48:47.686746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 02:48:47.686875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 02:48:47.686906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.687812       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 02:48:47.687851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1004 02:48:49.277430       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:02:10 addons-561541 kubelet[1504]: E1004 03:02:10.043701    1504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="bda0a8b9-d255-4083-9afe-f4de2a62ec0d"
	Oct 04 03:02:19 addons-561541 kubelet[1504]: E1004 03:02:19.281652    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010939281406587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589684,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:19 addons-561541 kubelet[1504]: E1004 03:02:19.281688    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010939281406587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589684,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:25 addons-561541 kubelet[1504]: I1004 03:02:25.043108    1504 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 03:02:28 addons-561541 kubelet[1504]: I1004 03:02:28.121832    1504 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 03:02:29 addons-561541 kubelet[1504]: E1004 03:02:29.284233    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010949283925393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:29 addons-561541 kubelet[1504]: E1004 03:02:29.284296    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010949283925393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:39 addons-561541 kubelet[1504]: E1004 03:02:39.287185    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010959286930023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:39 addons-561541 kubelet[1504]: E1004 03:02:39.287220    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010959286930023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:49 addons-561541 kubelet[1504]: E1004 03:02:49.290072    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010969289826524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:49 addons-561541 kubelet[1504]: E1004 03:02:49.290111    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010969289826524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:59 addons-561541 kubelet[1504]: E1004 03:02:59.292964    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010979292735851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:59 addons-561541 kubelet[1504]: E1004 03:02:59.293000    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010979292735851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:09 addons-561541 kubelet[1504]: E1004 03:03:09.295271    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010989295037452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:09 addons-561541 kubelet[1504]: E1004 03:03:09.295308    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010989295037452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:19 addons-561541 kubelet[1504]: E1004 03:03:19.297697    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010999297464986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:19 addons-561541 kubelet[1504]: E1004 03:03:19.297734    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010999297464986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:29 addons-561541 kubelet[1504]: E1004 03:03:29.300568    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011009300342547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:29 addons-561541 kubelet[1504]: E1004 03:03:29.300610    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011009300342547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:39 addons-561541 kubelet[1504]: E1004 03:03:39.303436    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011019303209229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:39 addons-561541 kubelet[1504]: E1004 03:03:39.303471    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011019303209229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:599009,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:41 addons-561541 kubelet[1504]: I1004 03:03:41.081453    1504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=73.993422728 podStartE2EDuration="12m15.081437353s" podCreationTimestamp="2024-10-04 02:51:26 +0000 UTC" firstStartedPulling="2024-10-04 02:51:26.929427672 +0000 UTC m=+158.094621378" lastFinishedPulling="2024-10-04 03:02:28.017442297 +0000 UTC m=+819.182636003" observedRunningTime="2024-10-04 03:02:28.134621791 +0000 UTC m=+819.299815505" watchObservedRunningTime="2024-10-04 03:03:41.081437353 +0000 UTC m=+892.246631059"
	Oct 04 03:03:41 addons-561541 kubelet[1504]: E1004 03:03:41.081642    1504 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75a7f403-12b6-4f98-b0af-8bf7c3aa0ab1" containerName="logviewer"
	Oct 04 03:03:41 addons-561541 kubelet[1504]: I1004 03:03:41.081679    1504 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a7f403-12b6-4f98-b0af-8bf7c3aa0ab1" containerName="logviewer"
	Oct 04 03:03:41 addons-561541 kubelet[1504]: I1004 03:03:41.155870    1504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrtp\" (UniqueName: \"kubernetes.io/projected/b8f88554-e6b4-49c7-a48f-9a5770d29df2-kube-api-access-pnrtp\") pod \"hello-world-app-55bf9c44b4-n76qr\" (UID: \"b8f88554-e6b4-49c7-a48f-9a5770d29df2\") " pod="default/hello-world-app-55bf9c44b4-n76qr"
	
	
	==> storage-provisioner [0d15cf46332c64d3e7a662fc0b4577dc8d495d7d97618c2a5c069605014065da] <==
	I1004 02:49:41.198898       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 02:49:41.221190       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 02:49:41.221345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 02:49:41.242699       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 02:49:41.245549       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41bdd897-b9f7-4fc8-98f5-b9ea8304c00f", APIVersion:"v1", ResourceVersion:"936", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-561541_be83665d-4dd2-47e0-9163-d59677258681 became leader
	I1004 02:49:41.245836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-561541_be83665d-4dd2-47e0-9163-d59677258681!
	I1004 02:49:41.346253       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-561541_be83665d-4dd2-47e0-9163-d59677258681!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-561541 -n addons-561541
helpers_test.go:261: (dbg) Run:  kubectl --context addons-561541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-v956v ingress-nginx-admission-patch-djgkm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-561541 describe pod ingress-nginx-admission-create-v956v ingress-nginx-admission-patch-djgkm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-561541 describe pod ingress-nginx-admission-create-v956v ingress-nginx-admission-patch-djgkm: exit status 1 (113.612192ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-v956v" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-djgkm" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-561541 describe pod ingress-nginx-admission-create-v956v ingress-nginx-admission-patch-djgkm: exit status 1
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable ingress --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-561541 addons disable ingress --alsologtostderr -v=1: (7.740580974s)
--- FAIL: TestAddons/parallel/Ingress (152.39s)
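
The Ingress failure above appears to trace back to the "addons-561541 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'" step, which the Audit table later in this report shows with no recorded End Time: the request never returned successfully before the test gave up. A minimal triage sketch, assuming the minikube ingress addon's default object names (namespace ingress-nginx, deployment ingress-nginx-controller) and the nginx.example.com host used by the test; these commands are illustrative and were not part of the recorded run:

	# Is the controller pod Ready, and what do its recent logs say?
	kubectl --context addons-561541 -n ingress-nginx get pods -o wide
	kubectl --context addons-561541 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
	# Did the test's Ingress object pick up an address and backend?
	kubectl --context addons-561541 get ingress -A
	# Repeat the probe the test performs, from inside the node
	minikube -p addons-561541 ssh -- curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'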

x
+
TestAddons/parallel/MetricsServer (364.16s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:395: metrics-server stabilized in 3.557858ms
addons_test.go:397: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4hhst" [7577c62c-151a-4a09-91f6-abd270367e65] Running
addons_test.go:397: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.021073567s
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (93.895633ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 12m12.022594071s

** /stderr **
I1004 03:01:07.025577    7560 retry.go:31] will retry after 2.660183141s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (94.670199ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 12m14.777803655s

** /stderr **
I1004 03:01:09.780754    7560 retry.go:31] will retry after 5.719598542s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (134.712082ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 12m20.627735486s

** /stderr **
I1004 03:01:15.635973    7560 retry.go:31] will retry after 9.454866035s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (88.591234ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 12m30.176736634s

** /stderr **
I1004 03:01:25.179777    7560 retry.go:31] will retry after 11.476702567s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (88.613166ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 12m41.745764215s

** /stderr **
I1004 03:01:36.748479    7560 retry.go:31] will retry after 18.001011264s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (85.434419ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 12m59.832024539s

** /stderr **
I1004 03:01:54.835224    7560 retry.go:31] will retry after 11.805761121s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (94.239294ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 13m11.733172394s

** /stderr **
I1004 03:02:06.736146    7560 retry.go:31] will retry after 25.525040609s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (88.396737ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 13m37.346590633s

** /stderr **
I1004 03:02:32.349903    7560 retry.go:31] will retry after 47.475927298s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (91.251997ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 14m24.914355879s

** /stderr **
I1004 03:03:19.917424    7560 retry.go:31] will retry after 37.444171389s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (92.713531ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 15m2.451278319s

** /stderr **
I1004 03:03:57.454600    7560 retry.go:31] will retry after 1m10.448744435s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (87.791456ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 16m12.988763453s

** /stderr **
I1004 03:05:07.992139    7560 retry.go:31] will retry after 1m0.919243503s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (85.286062ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 17m13.99473028s

** /stderr **
I1004 03:06:08.997741    7560 retry.go:31] will retry after 53.96444052s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-561541 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-561541 top pods -n kube-system: exit status 1 (86.875446ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-l72ll, age: 18m8.049621186s

** /stderr **
addons_test.go:417: failed checking metric server: exit status 1
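
Every "kubectl top pods -n kube-system" retry above fails with "Metrics not available", i.e. the metrics-server pod is Running but the aggregated metrics API never served data for these pods within the test's budget. A short diagnostic sketch, assuming the addon's usual object names (APIService v1beta1.metrics.k8s.io, deployment metrics-server in kube-system); the commands are illustrative and were not part of the recorded run:

	# Is the aggregated metrics API registered and Available?
	kubectl --context addons-561541 get apiservice v1beta1.metrics.k8s.io
	# What does metrics-server itself log?
	kubectl --context addons-561541 -n kube-system logs deploy/metrics-server --tail=50
	# Query the endpoint kubectl top reads from
	kubectl --context addons-561541 get --raw /apis/metrics.k8s.io/v1beta1/pods | head -c 300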
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-561541
helpers_test.go:235: (dbg) docker inspect addons-561541:

-- stdout --
	[
	    {
	        "Id": "1a05bccdb598d1519bc2517f2b858714fa72d144b3c45751c0d7ba2ea4a94d63",
	        "Created": "2024-10-04T02:48:26.592380066Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8833,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-04T02:48:26.741857699Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/1a05bccdb598d1519bc2517f2b858714fa72d144b3c45751c0d7ba2ea4a94d63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1a05bccdb598d1519bc2517f2b858714fa72d144b3c45751c0d7ba2ea4a94d63/hostname",
	        "HostsPath": "/var/lib/docker/containers/1a05bccdb598d1519bc2517f2b858714fa72d144b3c45751c0d7ba2ea4a94d63/hosts",
	        "LogPath": "/var/lib/docker/containers/1a05bccdb598d1519bc2517f2b858714fa72d144b3c45751c0d7ba2ea4a94d63/1a05bccdb598d1519bc2517f2b858714fa72d144b3c45751c0d7ba2ea4a94d63-json.log",
	        "Name": "/addons-561541",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-561541:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-561541",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cdbe499e5bb99b224e5a6a7bef44d8a42b419163309df824b5b164f76a7d5ba3-init/diff:/var/lib/docker/overlay2/113409e5ac8a20e78db05ebf8d2720874d391240a7f47648e5e10a2a0c89288f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cdbe499e5bb99b224e5a6a7bef44d8a42b419163309df824b5b164f76a7d5ba3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cdbe499e5bb99b224e5a6a7bef44d8a42b419163309df824b5b164f76a7d5ba3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cdbe499e5bb99b224e5a6a7bef44d8a42b419163309df824b5b164f76a7d5ba3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-561541",
	                "Source": "/var/lib/docker/volumes/addons-561541/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-561541",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-561541",
	                "name.minikube.sigs.k8s.io": "addons-561541",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52b0e926b39a732ca349c0438ff36c69068bb9900ade82646dffd0cb2af9a447",
	            "SandboxKey": "/var/run/docker/netns/52b0e926b39a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-561541": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "9afb70ed7dfa63e157614e8e9c2bfa4a942ca170e167f5154862dbc2e3597630",
	                    "EndpointID": "1833c5b85db81f3386a924800b61d01e82129db0e3bf1c5cc01984e503d957b6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-561541",
	                        "1a05bccdb598"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-561541 -n addons-561541
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-561541 logs -n 25: (1.377818395s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-973464                                                                   | download-docker-973464 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-541238   | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | binary-mirror-541238                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36901                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-541238                                                                     | binary-mirror-541238   | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| addons  | enable dashboard -p                                                                         | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | addons-561541                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | addons-561541                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-561541 --wait=true                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=logviewer                                                                          |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:51 UTC | 04 Oct 24 02:51 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:59 UTC | 04 Oct 24 02:59 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-561541 ip                                                                            | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:59 UTC | 04 Oct 24 02:59 UTC |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 02:59 UTC | 04 Oct 24 02:59 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | -p addons-561541                                                                            |                        |         |         |                     |                     |
	| addons  | addons-561541 addons                                                                        | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-561541 addons                                                                        | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-561541 ssh cat                                                                       | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | /opt/local-path-provisioner/pvc-7e10a70c-e181-4d72-a74e-5076f85972f6_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-561541 addons                                                                        | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | -p addons-561541                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | logviewer --alsologtostderr                                                                 |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-561541 addons                                                                        | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-561541 ssh curl -s                                                                   | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-561541 ip                                                                            | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:03 UTC | 04 Oct 24 03:03 UTC |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:03 UTC | 04 Oct 24 03:03 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-561541 addons disable                                                                | addons-561541          | jenkins | v1.34.0 | 04 Oct 24 03:03 UTC | 04 Oct 24 03:03 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 02:48:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:48:02.163776    8328 out.go:345] Setting OutFile to fd 1 ...
	I1004 02:48:02.163979    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:02.163993    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:48:02.163999    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:02.164385    8328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 02:48:02.164859    8328 out.go:352] Setting JSON to false
	I1004 02:48:02.165601    8328 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1828,"bootTime":1728008255,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1004 02:48:02.165673    8328 start.go:139] virtualization:  
	I1004 02:48:02.168414    8328 out.go:177] * [addons-561541] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 02:48:02.169975    8328 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 02:48:02.170033    8328 notify.go:220] Checking for updates...
	I1004 02:48:02.173077    8328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:48:02.174325    8328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 02:48:02.175444    8328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	I1004 02:48:02.176798    8328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 02:48:02.177957    8328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 02:48:02.179335    8328 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 02:48:02.200601    8328 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 02:48:02.200736    8328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:48:02.261920    8328 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-04 02:48:02.252897049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:48:02.262037    8328 docker.go:318] overlay module found
	I1004 02:48:02.263554    8328 out.go:177] * Using the docker driver based on user configuration
	I1004 02:48:02.265082    8328 start.go:297] selected driver: docker
	I1004 02:48:02.265098    8328 start.go:901] validating driver "docker" against <nil>
	I1004 02:48:02.265111    8328 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 02:48:02.265778    8328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:48:02.320232    8328 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-04 02:48:02.302813386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:48:02.320442    8328 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 02:48:02.320665    8328 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:48:02.322048    8328 out.go:177] * Using Docker driver with root privileges
	I1004 02:48:02.323354    8328 cni.go:84] Creating CNI manager for ""
	I1004 02:48:02.323420    8328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 02:48:02.323435    8328 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 02:48:02.323507    8328 start.go:340] cluster config:
	{Name:addons-561541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-561541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:48:02.324871    8328 out.go:177] * Starting "addons-561541" primary control-plane node in "addons-561541" cluster
	I1004 02:48:02.326445    8328 cache.go:121] Beginning downloading kic base image for docker with crio
	I1004 02:48:02.328038    8328 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1004 02:48:02.329392    8328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:48:02.329444    8328 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1004 02:48:02.329458    8328 cache.go:56] Caching tarball of preloaded images
	I1004 02:48:02.329485    8328 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 02:48:02.329548    8328 preload.go:172] Found /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1004 02:48:02.329559    8328 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 02:48:02.329903    8328 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/config.json ...
	I1004 02:48:02.329967    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/config.json: {Name:mk5d51ff6027cfca40f377ff0780690a0b7c7e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:02.343615    8328 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1004 02:48:02.343745    8328 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1004 02:48:02.343767    8328 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1004 02:48:02.343772    8328 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1004 02:48:02.343782    8328 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1004 02:48:02.343795    8328 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1004 02:48:19.187044    8328 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1004 02:48:19.187078    8328 cache.go:194] Successfully downloaded all kic artifacts
	I1004 02:48:19.187118    8328 start.go:360] acquireMachinesLock for addons-561541: {Name:mk28445b2742a1e7724f7048fe9efccb251276cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:48:19.187243    8328 start.go:364] duration metric: took 108.101µs to acquireMachinesLock for "addons-561541"
	I1004 02:48:19.187270    8328 start.go:93] Provisioning new machine with config: &{Name:addons-561541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-561541 Namespace:default APIServerHAVIP: APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:48:19.187349    8328 start.go:125] createHost starting for "" (driver="docker")
	I1004 02:48:19.194304    8328 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1004 02:48:19.194570    8328 start.go:159] libmachine.API.Create for "addons-561541" (driver="docker")
	I1004 02:48:19.194610    8328 client.go:168] LocalClient.Create starting
	I1004 02:48:19.194732    8328 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem
	I1004 02:48:19.722917    8328 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem
	I1004 02:48:20.262959    8328 cli_runner.go:164] Run: docker network inspect addons-561541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1004 02:48:20.278663    8328 cli_runner.go:211] docker network inspect addons-561541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1004 02:48:20.278757    8328 network_create.go:284] running [docker network inspect addons-561541] to gather additional debugging logs...
	I1004 02:48:20.278778    8328 cli_runner.go:164] Run: docker network inspect addons-561541
	W1004 02:48:20.299351    8328 cli_runner.go:211] docker network inspect addons-561541 returned with exit code 1
	I1004 02:48:20.299389    8328 network_create.go:287] error running [docker network inspect addons-561541]: docker network inspect addons-561541: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-561541 not found
	I1004 02:48:20.299402    8328 network_create.go:289] output of [docker network inspect addons-561541]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-561541 not found
	
	** /stderr **
	I1004 02:48:20.299499    8328 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 02:48:20.314626    8328 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001805ee0}
	I1004 02:48:20.314670    8328 network_create.go:124] attempt to create docker network addons-561541 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1004 02:48:20.314727    8328 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-561541 addons-561541
	I1004 02:48:20.387495    8328 network_create.go:108] docker network addons-561541 192.168.49.0/24 created
	I1004 02:48:20.387530    8328 kic.go:121] calculated static IP "192.168.49.2" for the "addons-561541" container
	I1004 02:48:20.387612    8328 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1004 02:48:20.402489    8328 cli_runner.go:164] Run: docker volume create addons-561541 --label name.minikube.sigs.k8s.io=addons-561541 --label created_by.minikube.sigs.k8s.io=true
	I1004 02:48:20.419041    8328 oci.go:103] Successfully created a docker volume addons-561541
	I1004 02:48:20.419132    8328 cli_runner.go:164] Run: docker run --rm --name addons-561541-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-561541 --entrypoint /usr/bin/test -v addons-561541:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1004 02:48:22.495377    8328 cli_runner.go:217] Completed: docker run --rm --name addons-561541-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-561541 --entrypoint /usr/bin/test -v addons-561541:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (2.076203892s)
	I1004 02:48:22.495406    8328 oci.go:107] Successfully prepared a docker volume addons-561541
	I1004 02:48:22.495429    8328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:48:22.495447    8328 kic.go:194] Starting extracting preloaded images to volume ...
	I1004 02:48:22.495514    8328 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-561541:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1004 02:48:26.520187    8328 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-561541:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.024631682s)
	I1004 02:48:26.520227    8328 kic.go:203] duration metric: took 4.024777009s to extract preloaded images to volume ...
	W1004 02:48:26.520375    8328 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1004 02:48:26.520510    8328 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1004 02:48:26.578676    8328 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-561541 --name addons-561541 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-561541 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-561541 --network addons-561541 --ip 192.168.49.2 --volume addons-561541:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1004 02:48:26.906403    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Running}}
	I1004 02:48:26.925383    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:26.954872    8328 cli_runner.go:164] Run: docker exec addons-561541 stat /var/lib/dpkg/alternatives/iptables
	I1004 02:48:27.024973    8328 oci.go:144] the created container "addons-561541" has a running status.
	I1004 02:48:27.025003    8328 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa...
	I1004 02:48:27.507752    8328 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1004 02:48:27.536001    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:27.564609    8328 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1004 02:48:27.564639    8328 kic_runner.go:114] Args: [docker exec --privileged addons-561541 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1004 02:48:27.632721    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:27.652065    8328 machine.go:93] provisionDockerMachine start ...
	I1004 02:48:27.652172    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:27.677592    8328 main.go:141] libmachine: Using SSH client type: native
	I1004 02:48:27.677857    8328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1004 02:48:27.677872    8328 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 02:48:27.828909    8328 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-561541
	
	I1004 02:48:27.828935    8328 ubuntu.go:169] provisioning hostname "addons-561541"
	I1004 02:48:27.829023    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:27.852868    8328 main.go:141] libmachine: Using SSH client type: native
	I1004 02:48:27.853109    8328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1004 02:48:27.853126    8328 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-561541 && echo "addons-561541" | sudo tee /etc/hostname
	I1004 02:48:28.009681    8328 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-561541
	
	I1004 02:48:28.009815    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:28.029190    8328 main.go:141] libmachine: Using SSH client type: native
	I1004 02:48:28.029544    8328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1004 02:48:28.029569    8328 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-561541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-561541/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-561541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 02:48:28.165042    8328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:48:28.165069    8328 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19546-2238/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-2238/.minikube}
	I1004 02:48:28.165102    8328 ubuntu.go:177] setting up certificates
	I1004 02:48:28.165112    8328 provision.go:84] configureAuth start
	I1004 02:48:28.165178    8328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-561541
	I1004 02:48:28.182797    8328 provision.go:143] copyHostCerts
	I1004 02:48:28.182883    8328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem (1679 bytes)
	I1004 02:48:28.183003    8328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem (1082 bytes)
	I1004 02:48:28.183083    8328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem (1123 bytes)
	I1004 02:48:28.183138    8328 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem org=jenkins.addons-561541 san=[127.0.0.1 192.168.49.2 addons-561541 localhost minikube]
	I1004 02:48:28.508841    8328 provision.go:177] copyRemoteCerts
	I1004 02:48:28.508932    8328 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 02:48:28.508983    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:28.525627    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:28.625563    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 02:48:28.648696    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 02:48:28.671918    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 02:48:28.694810    8328 provision.go:87] duration metric: took 529.671211ms to configureAuth
	I1004 02:48:28.694837    8328 ubuntu.go:193] setting minikube options for container-runtime
	I1004 02:48:28.695050    8328 config.go:182] Loaded profile config "addons-561541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 02:48:28.695157    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:28.711922    8328 main.go:141] libmachine: Using SSH client type: native
	I1004 02:48:28.712204    8328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1004 02:48:28.712226    8328 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 02:48:28.940754    8328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 02:48:28.940822    8328 machine.go:96] duration metric: took 1.288733565s to provisionDockerMachine
	I1004 02:48:28.940846    8328 client.go:171] duration metric: took 9.746224774s to LocalClient.Create
	I1004 02:48:28.940899    8328 start.go:167] duration metric: took 9.746311502s to libmachine.API.Create "addons-561541"
	I1004 02:48:28.940924    8328 start.go:293] postStartSetup for "addons-561541" (driver="docker")
	I1004 02:48:28.940950    8328 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 02:48:28.941086    8328 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 02:48:28.941163    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:28.958400    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:29.054204    8328 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 02:48:29.057308    8328 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1004 02:48:29.057353    8328 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1004 02:48:29.057365    8328 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1004 02:48:29.057378    8328 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1004 02:48:29.057397    8328 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/addons for local assets ...
	I1004 02:48:29.057476    8328 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/files for local assets ...
	I1004 02:48:29.057506    8328 start.go:296] duration metric: took 116.563771ms for postStartSetup
	I1004 02:48:29.057879    8328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-561541
	I1004 02:48:29.073843    8328 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/config.json ...
	I1004 02:48:29.074139    8328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 02:48:29.074189    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:29.090614    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:29.181670    8328 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1004 02:48:29.185928    8328 start.go:128] duration metric: took 9.998563394s to createHost
	I1004 02:48:29.185997    8328 start.go:83] releasing machines lock for "addons-561541", held for 9.99874347s
	I1004 02:48:29.186080    8328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-561541
	I1004 02:48:29.202215    8328 ssh_runner.go:195] Run: cat /version.json
	I1004 02:48:29.202265    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:29.202273    8328 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 02:48:29.202343    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:29.223326    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:29.223493    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:29.454866    8328 ssh_runner.go:195] Run: systemctl --version
	I1004 02:48:29.459013    8328 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 02:48:29.598778    8328 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 02:48:29.602816    8328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 02:48:29.621156    8328 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1004 02:48:29.621279    8328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 02:48:29.649362    8328 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1004 02:48:29.649384    8328 start.go:495] detecting cgroup driver to use...
	I1004 02:48:29.649428    8328 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1004 02:48:29.649496    8328 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 02:48:29.664664    8328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 02:48:29.675126    8328 docker.go:217] disabling cri-docker service (if available) ...
	I1004 02:48:29.675213    8328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 02:48:29.688214    8328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 02:48:29.702371    8328 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 02:48:29.782447    8328 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 02:48:29.884035    8328 docker.go:233] disabling docker service ...
	I1004 02:48:29.884133    8328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 02:48:29.903634    8328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 02:48:29.915810    8328 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 02:48:30.005243    8328 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 02:48:30.108036    8328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 02:48:30.120992    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 02:48:30.140144    8328 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 02:48:30.140251    8328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.151213    8328 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 02:48:30.151334    8328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.162049    8328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.172548    8328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.182879    8328 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 02:48:30.192518    8328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.202556    8328 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.218816    8328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:48:30.228798    8328 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 02:48:30.237642    8328 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 02:48:30.237708    8328 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 02:48:30.251373    8328 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 02:48:30.260317    8328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:48:30.338222    8328 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 02:48:30.444495    8328 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 02:48:30.444598    8328 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 02:48:30.448249    8328 start.go:563] Will wait 60s for crictl version
	I1004 02:48:30.448367    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:48:30.451773    8328 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 02:48:30.496757    8328 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1004 02:48:30.496962    8328 ssh_runner.go:195] Run: crio --version
	I1004 02:48:30.535478    8328 ssh_runner.go:195] Run: crio --version
	I1004 02:48:30.575596    8328 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1004 02:48:30.577393    8328 cli_runner.go:164] Run: docker network inspect addons-561541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 02:48:30.592973    8328 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1004 02:48:30.596544    8328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:48:30.607213    8328 kubeadm.go:883] updating cluster {Name:addons-561541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-561541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 02:48:30.607336    8328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:48:30.607389    8328 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:48:30.680284    8328 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 02:48:30.680308    8328 crio.go:433] Images already preloaded, skipping extraction
	I1004 02:48:30.680369    8328 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:48:30.716735    8328 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 02:48:30.716761    8328 cache_images.go:84] Images are preloaded, skipping loading
	I1004 02:48:30.716770    8328 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1004 02:48:30.716857    8328 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-561541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-561541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 02:48:30.716937    8328 ssh_runner.go:195] Run: crio config
	I1004 02:48:30.770881    8328 cni.go:84] Creating CNI manager for ""
	I1004 02:48:30.770903    8328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 02:48:30.770913    8328 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 02:48:30.770941    8328 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-561541 NodeName:addons-561541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 02:48:30.771100    8328 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-561541"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 02:48:30.771180    8328 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 02:48:30.779940    8328 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 02:48:30.780022    8328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 02:48:30.788511    8328 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1004 02:48:30.805874    8328 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 02:48:30.823701    8328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1004 02:48:30.841418    8328 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1004 02:48:30.844797    8328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:48:30.856005    8328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:48:30.934755    8328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 02:48:30.949008    8328 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541 for IP: 192.168.49.2
	I1004 02:48:30.949033    8328 certs.go:194] generating shared ca certs ...
	I1004 02:48:30.949050    8328 certs.go:226] acquiring lock for ca certs: {Name:mk468b07ab6620fd74cefc3667e1a8643008ce5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:30.949173    8328 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key
	I1004 02:48:31.188600    8328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt ...
	I1004 02:48:31.188632    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt: {Name:mk85bb8ad320af02292bb5af5763b5687fc2c71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.188832    8328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key ...
	I1004 02:48:31.188845    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key: {Name:mkf88c660188079b3d7cb04d43c22f4d16f00ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.188940    8328 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key
	I1004 02:48:31.432161    8328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt ...
	I1004 02:48:31.432192    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt: {Name:mkcbaf945ec67de02c8c92440fa4864dff75ef93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.432414    8328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key ...
	I1004 02:48:31.432429    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key: {Name:mk64a6b53b8d8913780411f2edaf4bbe5b2e2be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.432520    8328 certs.go:256] generating profile certs ...
	I1004 02:48:31.432579    8328 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.key
	I1004 02:48:31.432596    8328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt with IP's: []
	I1004 02:48:32.274868    8328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt ...
	I1004 02:48:32.274905    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: {Name:mk77a3876305a8cb8211156243bd37074c11c7eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.275109    8328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.key ...
	I1004 02:48:32.275121    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.key: {Name:mkad3a95dd66c000812d4bac0a0c5f17f6bccd6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.275208    8328 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.key.32857c0e
	I1004 02:48:32.275228    8328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.crt.32857c0e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1004 02:48:32.826126    8328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.crt.32857c0e ...
	I1004 02:48:32.826157    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.crt.32857c0e: {Name:mk09838633f5dc8e87cc56a6ac4328b525754f19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.826379    8328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.key.32857c0e ...
	I1004 02:48:32.826393    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.key.32857c0e: {Name:mk7085a89dcc6b89aa5d083374666fc9f9a6ebfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.826495    8328 certs.go:381] copying /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.crt.32857c0e -> /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.crt
	I1004 02:48:32.826583    8328 certs.go:385] copying /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.key.32857c0e -> /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.key
	I1004 02:48:32.826650    8328 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.key
	I1004 02:48:32.826669    8328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.crt with IP's: []
	I1004 02:48:33.004281    8328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.crt ...
	I1004 02:48:33.004311    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.crt: {Name:mk0273fcd7c5e6aa98ec0921888aa45a3c335bcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:33.004504    8328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.key ...
	I1004 02:48:33.004517    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.key: {Name:mkbb6adcbfb968918aa7d55d4a3c911d213bc33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:33.004709    8328 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 02:48:33.004748    8328 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem (1082 bytes)
	I1004 02:48:33.004791    8328 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem (1123 bytes)
	I1004 02:48:33.004821    8328 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem (1679 bytes)
	I1004 02:48:33.005446    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 02:48:33.031137    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 02:48:33.056300    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 02:48:33.080349    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 02:48:33.104544    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1004 02:48:33.129757    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 02:48:33.153524    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 02:48:33.177465    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 02:48:33.201240    8328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 02:48:33.233724    8328 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 02:48:33.250882    8328 ssh_runner.go:195] Run: openssl version
	I1004 02:48:33.256372    8328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 02:48:33.267260    8328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:48:33.270699    8328 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:48:33.270779    8328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:48:33.277391    8328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 02:48:33.286832    8328 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 02:48:33.290066    8328 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 02:48:33.290112    8328 kubeadm.go:392] StartCluster: {Name:addons-561541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-561541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:48:33.290190    8328 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 02:48:33.290248    8328 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 02:48:33.325537    8328 cri.go:89] found id: ""
	I1004 02:48:33.325654    8328 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 02:48:33.334174    8328 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:48:33.342726    8328 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1004 02:48:33.342828    8328 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:48:33.351242    8328 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:48:33.351261    8328 kubeadm.go:157] found existing configuration files:
	
	I1004 02:48:33.351308    8328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 02:48:33.359715    8328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 02:48:33.359793    8328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 02:48:33.369159    8328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 02:48:33.378101    8328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 02:48:33.378190    8328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 02:48:33.386679    8328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 02:48:33.395688    8328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 02:48:33.395754    8328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 02:48:33.404303    8328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 02:48:33.412666    8328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 02:48:33.412755    8328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
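
	The four grep/rm pairs above are a simple stale-config sweep: for each kubeconfig under /etc/kubernetes, if the file does not mention the expected control-plane endpoint it is removed so kubeadm can regenerate it. A condensed sketch of that loop (run locally here; the real code issues the same commands over SSH and, as the log shows, tolerates "No such file or directory" from grep):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint is absent or the file is
			// missing; either way the file is removed so kubeadm rewrites it.
			if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
				fmt.Printf("%q may not reference %s - removing\n", f, endpoint)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}
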
	I1004 02:48:33.420849    8328 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1004 02:48:33.459536    8328 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 02:48:33.459655    8328 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 02:48:33.479195    8328 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1004 02:48:33.479287    8328 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1004 02:48:33.479341    8328 kubeadm.go:310] OS: Linux
	I1004 02:48:33.479403    8328 kubeadm.go:310] CGROUPS_CPU: enabled
	I1004 02:48:33.479471    8328 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1004 02:48:33.479535    8328 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1004 02:48:33.479601    8328 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1004 02:48:33.479668    8328 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1004 02:48:33.479765    8328 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1004 02:48:33.479845    8328 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1004 02:48:33.479913    8328 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1004 02:48:33.479978    8328 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1004 02:48:33.538372    8328 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:48:33.538512    8328 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:48:33.538605    8328 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 02:48:33.547733    8328 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:48:33.550706    8328 out.go:235]   - Generating certificates and keys ...
	I1004 02:48:33.550813    8328 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 02:48:33.550894    8328 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 02:48:34.116405    8328 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 02:48:34.992261    8328 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 02:48:35.858654    8328 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 02:48:36.525746    8328 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 02:48:36.873613    8328 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 02:48:36.873759    8328 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-561541 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1004 02:48:38.109795    8328 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 02:48:38.110112    8328 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-561541 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1004 02:48:38.275211    8328 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 02:48:39.064025    8328 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 02:48:39.444378    8328 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 02:48:39.444574    8328 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:48:39.710488    8328 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:48:40.070453    8328 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 02:48:40.302074    8328 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:48:41.044572    8328 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:48:41.573391    8328 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:48:41.574157    8328 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:48:41.577402    8328 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:48:41.578974    8328 out.go:235]   - Booting up control plane ...
	I1004 02:48:41.579076    8328 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:48:41.579160    8328 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:48:41.580167    8328 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:48:41.590213    8328 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:48:41.596069    8328 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:48:41.596129    8328 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 02:48:41.687080    8328 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 02:48:41.687202    8328 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 02:48:42.703341    8328 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.016535631s
	I1004 02:48:42.703488    8328 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 02:48:48.205075    8328 kubeadm.go:310] [api-check] The API server is healthy after 5.501721486s
	I1004 02:48:48.224911    8328 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:48:48.238228    8328 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:48:48.263116    8328 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:48:48.263312    8328 kubeadm.go:310] [mark-control-plane] Marking the node addons-561541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:48:48.274266    8328 kubeadm.go:310] [bootstrap-token] Using token: 2237cm.h1kig5501t3tmep9
	I1004 02:48:48.277001    8328 out.go:235]   - Configuring RBAC rules ...
	I1004 02:48:48.277134    8328 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:48:48.281449    8328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:48:48.289446    8328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:48:48.293667    8328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:48:48.297547    8328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:48:48.302512    8328 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:48:48.612029    8328 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:48:49.070634    8328 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 02:48:49.611902    8328 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 02:48:49.613766    8328 kubeadm.go:310] 
	I1004 02:48:49.613847    8328 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 02:48:49.613860    8328 kubeadm.go:310] 
	I1004 02:48:49.613938    8328 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 02:48:49.613947    8328 kubeadm.go:310] 
	I1004 02:48:49.613973    8328 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 02:48:49.614035    8328 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:48:49.614088    8328 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:48:49.614097    8328 kubeadm.go:310] 
	I1004 02:48:49.614150    8328 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 02:48:49.614157    8328 kubeadm.go:310] 
	I1004 02:48:49.614205    8328 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:48:49.614212    8328 kubeadm.go:310] 
	I1004 02:48:49.614264    8328 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 02:48:49.614357    8328 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:48:49.614440    8328 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:48:49.614451    8328 kubeadm.go:310] 
	I1004 02:48:49.614534    8328 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:48:49.614613    8328 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 02:48:49.614623    8328 kubeadm.go:310] 
	I1004 02:48:49.614707    8328 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2237cm.h1kig5501t3tmep9 \
	I1004 02:48:49.614812    8328 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aca64f2211befde5878f407d8185a64dfef5cf14c4e1f31b88bf71c58b586df2 \
	I1004 02:48:49.614835    8328 kubeadm.go:310] 	--control-plane 
	I1004 02:48:49.614842    8328 kubeadm.go:310] 
	I1004 02:48:49.614927    8328 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:48:49.614934    8328 kubeadm.go:310] 
	I1004 02:48:49.615015    8328 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2237cm.h1kig5501t3tmep9 \
	I1004 02:48:49.615119    8328 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aca64f2211befde5878f407d8185a64dfef5cf14c4e1f31b88bf71c58b586df2 
	I1004 02:48:49.618636    8328 kubeadm.go:310] W1004 02:48:33.456263    1180 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 02:48:49.618934    8328 kubeadm.go:310] W1004 02:48:33.457096    1180 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 02:48:49.619148    8328 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1004 02:48:49.619255    8328 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:48:49.619273    8328 cni.go:84] Creating CNI manager for ""
	I1004 02:48:49.619282    8328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 02:48:49.622033    8328 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1004 02:48:49.624607    8328 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 02:48:49.628276    8328 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1004 02:48:49.628296    8328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1004 02:48:49.646956    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 02:48:49.922758    8328 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:48:49.922829    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:49.922894    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-561541 minikube.k8s.io/updated_at=2024_10_04T02_48_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=addons-561541 minikube.k8s.io/primary=true
	I1004 02:48:50.103387    8328 ops.go:34] apiserver oom_adj: -16
	I1004 02:48:50.103524    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:50.603716    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:51.104550    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:51.604455    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:52.104368    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:52.604327    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:53.103912    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:53.603568    8328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:53.689859    8328 kubeadm.go:1113] duration metric: took 3.767088472s to wait for elevateKubeSystemPrivileges
	I1004 02:48:53.689886    8328 kubeadm.go:394] duration metric: took 20.399777885s to StartCluster
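
	The burst of identical "get sa default" runs at roughly 500ms intervals is a plain poll loop: minikube keeps asking for the default service account until the control plane has created it, then records the elapsed time as the elevateKubeSystemPrivileges duration above. A self-contained sketch of that wait pattern (generic condition function; the real implementation shells out to kubectl exactly as shown in the log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// waitFor polls cond every interval until it succeeds or timeout elapses.
	func waitFor(cond func() error, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := cond(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		err := waitFor(func() error {
			// Succeeds once the default service account exists in the cluster.
			return exec.Command("kubectl", "get", "sa", "default").Run()
		}, 500*time.Millisecond, 6*time.Minute)
		fmt.Println("waited", time.Since(start), "err:", err)
	}
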
	I1004 02:48:53.689902    8328 settings.go:142] acquiring lock: {Name:mk9c80036423f55b2143f3dcbc4f16f5b78f75ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:53.690020    8328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 02:48:53.690421    8328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/kubeconfig: {Name:mkd1a87175976669e9a14c51acaef20b883a2130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:53.690615    8328 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:48:53.690745    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:48:53.690973    8328 config.go:182] Loaded profile config "addons-561541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 02:48:53.691007    8328 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:true metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1004 02:48:53.691089    8328 addons.go:69] Setting yakd=true in profile "addons-561541"
	I1004 02:48:53.691106    8328 addons.go:234] Setting addon yakd=true in "addons-561541"
	I1004 02:48:53.691129    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.691613    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.691995    8328 addons.go:69] Setting metrics-server=true in profile "addons-561541"
	I1004 02:48:53.692017    8328 addons.go:234] Setting addon metrics-server=true in "addons-561541"
	I1004 02:48:53.692039    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.692444    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.695667    8328 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-561541"
	I1004 02:48:53.695739    8328 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-561541"
	I1004 02:48:53.695879    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.696066    8328 addons.go:69] Setting registry=true in profile "addons-561541"
	I1004 02:48:53.696145    8328 addons.go:234] Setting addon registry=true in "addons-561541"
	I1004 02:48:53.696175    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.696617    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.696737    8328 addons.go:69] Setting cloud-spanner=true in profile "addons-561541"
	I1004 02:48:53.696753    8328 addons.go:234] Setting addon cloud-spanner=true in "addons-561541"
	I1004 02:48:53.696772    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.697179    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.699681    8328 addons.go:69] Setting storage-provisioner=true in profile "addons-561541"
	I1004 02:48:53.699753    8328 addons.go:234] Setting addon storage-provisioner=true in "addons-561541"
	I1004 02:48:53.699815    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.700544    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.701680    8328 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-561541"
	I1004 02:48:53.701738    8328 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-561541"
	I1004 02:48:53.701771    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.702264    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.702874    8328 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-561541"
	I1004 02:48:53.702893    8328 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-561541"
	I1004 02:48:53.703174    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.707557    8328 addons.go:69] Setting volcano=true in profile "addons-561541"
	I1004 02:48:53.707594    8328 addons.go:234] Setting addon volcano=true in "addons-561541"
	I1004 02:48:53.707629    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.708122    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.711944    8328 addons.go:69] Setting default-storageclass=true in profile "addons-561541"
	I1004 02:48:53.711982    8328 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-561541"
	I1004 02:48:53.712358    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.725311    8328 addons.go:69] Setting volumesnapshots=true in profile "addons-561541"
	I1004 02:48:53.725348    8328 addons.go:234] Setting addon volumesnapshots=true in "addons-561541"
	I1004 02:48:53.725387    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.725908    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.726053    8328 addons.go:69] Setting gcp-auth=true in profile "addons-561541"
	I1004 02:48:53.726076    8328 mustload.go:65] Loading cluster: addons-561541
	I1004 02:48:53.726232    8328 config.go:182] Loaded profile config "addons-561541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 02:48:53.726448    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.745304    8328 out.go:177] * Verifying Kubernetes components...
	I1004 02:48:53.745403    8328 addons.go:69] Setting ingress=true in profile "addons-561541"
	I1004 02:48:53.745421    8328 addons.go:234] Setting addon ingress=true in "addons-561541"
	I1004 02:48:53.745462    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.748470    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.750417    8328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:48:53.751247    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.758840    8328 addons.go:69] Setting ingress-dns=true in profile "addons-561541"
	I1004 02:48:53.758890    8328 addons.go:234] Setting addon ingress-dns=true in "addons-561541"
	I1004 02:48:53.758933    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.759545    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.782570    8328 addons.go:69] Setting inspektor-gadget=true in profile "addons-561541"
	I1004 02:48:53.782607    8328 addons.go:234] Setting addon inspektor-gadget=true in "addons-561541"
	I1004 02:48:53.782643    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.783133    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.816110    8328 addons.go:69] Setting logviewer=true in profile "addons-561541"
	I1004 02:48:53.816142    8328 addons.go:234] Setting addon logviewer=true in "addons-561541"
	I1004 02:48:53.816180    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.816656    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
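
	Each "Setting addon X=true" / "Checking if addons-561541 exists" / "docker container inspect" triplet above is one iteration over the enabled-addon map, with the inspect call confirming the node container is still running before the addon is configured. The overlapping Completed timestamps later in the log suggest the per-addon applies run concurrently; a rough, purely illustrative fan-out sketch (hypothetical enableAddon helper, error handling trimmed):

	package main

	import (
		"fmt"
		"sync"
	)

	// enableAddon stands in for the per-addon setup seen in the log: verify the
	// node container is up, copy the addon manifests, then kubectl-apply them.
	func enableAddon(name string) error {
		fmt.Println("enabling addon:", name)
		return nil // placeholder; the real work is manifest scp + kubectl apply
	}

	func main() {
		toEnable := map[string]bool{
			"metrics-server": true, "ingress": true, "registry": true,
			"volcano": false, // skipped here: unsupported with the crio runtime
		}
		var wg sync.WaitGroup
		for name, enabled := range toEnable {
			if !enabled {
				continue
			}
			wg.Add(1)
			go func(n string) {
				defer wg.Done()
				_ = enableAddon(n) // each addon is applied independently
			}(name)
		}
		wg.Wait()
	}
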
	I1004 02:48:53.862158    8328 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1004 02:48:53.865253    8328 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1004 02:48:53.868009    8328 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1004 02:48:53.877088    8328 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:48:53.877496    8328 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1004 02:48:53.877519    8328 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1004 02:48:53.877592    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:53.890315    8328 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 02:48:53.890336    8328 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 02:48:53.890398    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:53.916305    8328 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-561541"
	I1004 02:48:53.922199    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:53.922802    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:53.965662    8328 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:48:53.965689    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:48:53.965762    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:53.986061    8328 out.go:177]   - Using image docker.io/registry:2.8.3
	I1004 02:48:53.997322    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1004 02:48:54.000968    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1004 02:48:54.001123    8328 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1004 02:48:54.001137    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1004 02:48:54.001229    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.005586    8328 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1004 02:48:54.016828    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1004 02:48:54.017331    8328 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1004 02:48:54.017347    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1004 02:48:54.017406    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.018766    8328 addons.go:234] Setting addon default-storageclass=true in "addons-561541"
	I1004 02:48:54.018802    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:54.019333    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:48:54.026926    8328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1004 02:48:54.026946    8328 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1004 02:48:54.027008    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.029303    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:48:54.041373    8328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:48:54.042160    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1004 02:48:54.042224    8328 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	W1004 02:48:54.042456    8328 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1004 02:48:54.057777    8328 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1004 02:48:54.062510    8328 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1004 02:48:54.062570    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1004 02:48:54.062657    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.062817    8328 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1004 02:48:54.067457    8328 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1004 02:48:54.067521    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1004 02:48:54.067617    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.087710    8328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:48:54.088133    8328 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1004 02:48:54.088164    8328 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1004 02:48:54.088243    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.092003    8328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1004 02:48:54.094319    8328 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1004 02:48:54.094377    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1004 02:48:54.094465    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.115672    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1004 02:48:54.118054    8328 out.go:177]   - Using image docker.io/ivans3/minikube-log-viewer:v1
	I1004 02:48:54.120870    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1004 02:48:54.121147    8328 addons.go:431] installing /etc/kubernetes/addons/logviewer-dp-and-svc.yaml
	I1004 02:48:54.121194    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/logviewer-dp-and-svc.yaml (2016 bytes)
	I1004 02:48:54.121306    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.126276    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1004 02:48:54.128839    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1004 02:48:54.137969    8328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1004 02:48:54.147371    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1004 02:48:54.147392    8328 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1004 02:48:54.147457    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.159174    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.163315    8328 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1004 02:48:54.163806    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.164278    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.168581    8328 out.go:177]   - Using image docker.io/busybox:stable
	I1004 02:48:54.171155    8328 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1004 02:48:54.171177    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1004 02:48:54.171241    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.181758    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.193801    8328 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 02:48:54.193821    8328 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 02:48:54.193881    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:48:54.205175    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.213139    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.221785    8328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 02:48:54.221965    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 02:48:54.261607    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.266588    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.277445    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.287482    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.293708    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	W1004 02:48:54.297941    8328 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1004 02:48:54.297972    8328 retry.go:31] will retry after 169.493237ms: ssh: handshake failed: EOF
	I1004 02:48:54.298352    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	W1004 02:48:54.301417    8328 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1004 02:48:54.301439    8328 retry.go:31] will retry after 206.930752ms: ssh: handshake failed: EOF
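
	The two "ssh: handshake failed: EOF" warnings are benign: many SSH clients dial the node at once while its daemon is still settling, so the dialer simply retries after a short randomized delay (169ms, then about 207ms here). A generic sketch of that retry-with-jitter pattern (plain func() error; the real code wraps the SSH dial):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs op up to attempts times, sleeping a jittered, growing delay
	// between failures, similar to the retry.go behaviour in the log.
	func retry(op func() error, attempts int, base time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		calls := 0
		err := retry(func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed: EOF")
			}
			return nil
		}, 5, 150*time.Millisecond)
		fmt.Println("result:", err)
	}
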
	I1004 02:48:54.303548    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.312106    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:48:54.352364    8328 node_ready.go:35] waiting up to 6m0s for node "addons-561541" to be "Ready" ...
	I1004 02:48:54.583149    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 02:48:54.610971    8328 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 02:48:54.611042    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1004 02:48:54.620831    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1004 02:48:54.620909    8328 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1004 02:48:54.635348    8328 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1004 02:48:54.635427    8328 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1004 02:48:54.646078    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:48:54.663927    8328 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1004 02:48:54.663998    8328 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1004 02:48:54.664675    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1004 02:48:54.691626    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1004 02:48:54.701113    8328 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1004 02:48:54.701183    8328 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1004 02:48:54.715590    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1004 02:48:54.726337    8328 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1004 02:48:54.726411    8328 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1004 02:48:54.764341    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1004 02:48:54.764415    8328 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1004 02:48:54.813183    8328 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1004 02:48:54.813452    8328 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1004 02:48:54.825259    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1004 02:48:54.829539    8328 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 02:48:54.829608    8328 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 02:48:54.831870    8328 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1004 02:48:54.831931    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1004 02:48:54.858527    8328 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1004 02:48:54.858602    8328 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1004 02:48:54.917674    8328 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1004 02:48:54.917746    8328 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1004 02:48:54.995360    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1004 02:48:54.995435    8328 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1004 02:48:54.999766    8328 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1004 02:48:54.999838    8328 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1004 02:48:55.011551    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1004 02:48:55.018540    8328 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:48:55.018616    8328 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 02:48:55.040682    8328 addons.go:431] installing /etc/kubernetes/addons/logviewer-rbac.yaml
	I1004 02:48:55.040757    8328 ssh_runner.go:362] scp logviewer/logviewer-rbac.yaml --> /etc/kubernetes/addons/logviewer-rbac.yaml (1064 bytes)
	I1004 02:48:55.048253    8328 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1004 02:48:55.048331    8328 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1004 02:48:55.088569    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1004 02:48:55.124060    8328 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1004 02:48:55.124140    8328 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1004 02:48:55.163467    8328 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1004 02:48:55.163536    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1004 02:48:55.171385    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1004 02:48:55.171460    8328 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1004 02:48:55.197440    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:48:55.215875    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1004 02:48:55.215951    8328 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1004 02:48:55.223700    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/logviewer-dp-and-svc.yaml -f /etc/kubernetes/addons/logviewer-rbac.yaml
	I1004 02:48:55.290202    8328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1004 02:48:55.290290    8328 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1004 02:48:55.291108    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1004 02:48:55.294341    8328 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1004 02:48:55.294393    8328 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1004 02:48:55.341494    8328 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:48:55.341565    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1004 02:48:55.447078    8328 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1004 02:48:55.447155    8328 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1004 02:48:55.477667    8328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1004 02:48:55.477737    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1004 02:48:55.481333    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:48:55.581178    8328 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1004 02:48:55.581282    8328 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1004 02:48:55.587724    8328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1004 02:48:55.587796    8328 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1004 02:48:55.667514    8328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1004 02:48:55.667584    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1004 02:48:55.675740    8328 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1004 02:48:55.675813    8328 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1004 02:48:55.730121    8328 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1004 02:48:55.730180    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1004 02:48:55.743901    8328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1004 02:48:55.743971    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1004 02:48:55.781229    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1004 02:48:55.790360    8328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1004 02:48:55.790434    8328 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1004 02:48:55.873535    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1004 02:48:56.530616    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:48:57.112288    8328 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.890298073s)
	I1004 02:48:57.112371    8328 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
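
	The long sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to the gateway IP above the existing forward directive, and a log directive above errors. The same transformation expressed in Go over a Corefile string (illustrative only; minikube performs it with sed on the node, as shown above):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		corefile := `.:53 {
	        errors
	        health
	        forward . /etc/resolv.conf {
	           max_concurrent 1000
	        }
	        cache 30
	    }`

		hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }"
		var out []string
		for _, line := range strings.Split(corefile, "\n") {
			switch {
			case strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf"):
				out = append(out, hosts) // hosts block goes before forward
			case strings.TrimSpace(line) == "errors":
				out = append(out, "        log") // log directive goes before errors
			}
			out = append(out, line)
		}
		fmt.Println(strings.Join(out, "\n"))
	}
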
	I1004 02:48:57.750373    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.167118728s)
	I1004 02:48:57.903742    8328 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-561541" context rescaled to 1 replicas
	I1004 02:48:58.567137    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:48:59.233865    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.58770567s)
	I1004 02:49:00.512635    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.847890993s)
	I1004 02:49:00.512669    8328 addons.go:475] Verifying addon ingress=true in "addons-561541"
	I1004 02:49:00.512847    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.82115089s)
	I1004 02:49:00.512978    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.797329059s)
	I1004 02:49:00.513060    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.687741073s)
	I1004 02:49:00.513110    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.501488417s)
	I1004 02:49:00.513622    8328 addons.go:475] Verifying addon registry=true in "addons-561541"
	I1004 02:49:00.513154    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.424516771s)
	I1004 02:49:00.513241    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.315707328s)
	I1004 02:49:00.514727    8328 addons.go:475] Verifying addon metrics-server=true in "addons-561541"
	I1004 02:49:00.513271    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/logviewer-dp-and-svc.yaml -f /etc/kubernetes/addons/logviewer-rbac.yaml: (5.289513066s)
	I1004 02:49:00.513308    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.222150075s)
	I1004 02:49:00.513378    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.031967949s)
	W1004 02:49:00.515492    8328 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1004 02:49:00.515517    8328 retry.go:31] will retry after 133.493191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
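	The failure above is an ordering race, not a broken manifest: the VolumeSnapshot CRDs are created in the same kubectl apply batch as the VolumeSnapshotClass that uses them, so the apiserver has not yet registered the new kind when the class is mapped, hence "no matches for kind VolumeSnapshotClass". minikube simply retries, and the apply --force a few lines below completes once the CRDs are established. A sketch of avoiding the race manually by waiting for CRD establishment before applying the class (file paths as in the log; the KUBECONFIG and binary path are omitted for brevity):

		# Install the CRDs first and wait until the apiserver reports them Established
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=Established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		# Only then apply the VolumeSnapshotClass that references the new kind
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml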
	I1004 02:49:00.513436    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.732145135s)
	I1004 02:49:00.516162    8328 out.go:177] * Verifying ingress addon...
	I1004 02:49:00.518137    8328 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-561541 service yakd-dashboard -n yakd-dashboard
	
	I1004 02:49:00.518141    8328 out.go:177] * Verifying registry addon...
	I1004 02:49:00.521530    8328 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1004 02:49:00.521542    8328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1004 02:49:00.547787    8328 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1004 02:49:00.547820    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:00.549021    8328 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1004 02:49:00.549044    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
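	The kapi.go:75/96 entries above, and the long runs of them below, are readiness polls: minikube lists pods matching a label selector in the given namespace and re-checks until they report Ready, which here only happens after the node itself goes Ready at 02:49:40. A rough manual equivalent of the ingress and registry checks, assuming the same context (selectors and namespaces are taken from the log):

		# What the ingress and registry waits are polling for
		kubectl --context addons-561541 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
		kubectl --context addons-561541 -n kube-system get pods -l kubernetes.io/minikube-addons=registry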
	I1004 02:49:00.649189    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:49:00.879709    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:00.999883    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.126254053s)
	I1004 02:49:00.999924    8328 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-561541"
	I1004 02:49:01.003117    8328 out.go:177] * Verifying csi-hostpath-driver addon...
	I1004 02:49:01.006679    8328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1004 02:49:01.079290    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:01.080066    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:01.081699    8328 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1004 02:49:01.081723    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
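	The csi-hostpath-driver verification follows the same polling pattern. A quick sketch for checking the driver by hand (the CSIDriver object name is an assumption; the minikube hostpath addon normally registers hostpath.csi.k8s.io):

		# List registered CSI drivers and the addon's pods
		kubectl --context addons-561541 get csidriver
		kubectl --context addons-561541 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver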
	I1004 02:49:01.529227    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:01.529694    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:01.530504    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:01.983264    8328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.333994147s)
	I1004 02:49:02.012062    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:02.034699    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:02.036200    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:02.517843    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:02.527982    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:02.529546    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:03.010440    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:03.025761    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:03.027528    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:03.356694    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:03.511966    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:03.613197    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:03.613724    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:04.011402    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:04.029091    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:04.030733    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:04.511737    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:04.515000    8328 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1004 02:49:04.515100    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:49:04.536194    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:49:04.622437    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:04.622767    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:04.652340    8328 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1004 02:49:04.669570    8328 addons.go:234] Setting addon gcp-auth=true in "addons-561541"
	I1004 02:49:04.669618    8328 host.go:66] Checking if "addons-561541" exists ...
	I1004 02:49:04.670076    8328 cli_runner.go:164] Run: docker container inspect addons-561541 --format={{.State.Status}}
	I1004 02:49:04.685936    8328 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1004 02:49:04.685991    8328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-561541
	I1004 02:49:04.720743    8328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/addons-561541/id_rsa Username:docker}
	I1004 02:49:04.847183    8328 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1004 02:49:04.850577    8328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:49:04.853084    8328 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1004 02:49:04.853106    8328 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1004 02:49:04.887726    8328 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1004 02:49:04.887753    8328 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1004 02:49:04.910893    8328 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1004 02:49:04.910963    8328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1004 02:49:04.931574    8328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
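	The block above installs the gcp-auth addon: fake application credentials and a project id are copied onto the node over SSH, then the gcp-auth namespace, service, and mutating webhook are applied; the webhook later injects GOOGLE_APPLICATION_CREDENTIALS and the /google-app-creds.json mount into newly created pods. A sketch for inspecting the result once the addon is up (these commands are assumed, not taken from the log):

		# Confirm the webhook registration and the gcp-auth workload
		kubectl --context addons-561541 get mutatingwebhookconfigurations
		kubectl --context addons-561541 -n gcp-auth get pods,svc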
	I1004 02:49:05.014643    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:05.026749    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:05.027820    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:05.372402    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:05.531671    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:05.533353    8328 addons.go:475] Verifying addon gcp-auth=true in "addons-561541"
	I1004 02:49:05.538199    8328 out.go:177] * Verifying gcp-auth addon...
	I1004 02:49:05.541807    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:05.541928    8328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1004 02:49:05.626783    8328 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1004 02:49:05.626808    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:05.627058    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:06.010037    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:06.026808    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:06.027619    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:06.046300    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:06.510804    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:06.525508    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:06.526349    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:06.545412    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:07.009954    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:07.025167    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:07.026419    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:07.046477    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:07.510734    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:07.525475    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:07.526507    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:07.545626    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:07.855323    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:08.010990    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:08.025928    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:08.026663    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:08.045565    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:08.511112    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:08.525393    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:08.526257    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:08.545186    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:09.010156    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:09.025819    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:09.026678    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:09.046423    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:09.510254    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:09.525715    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:09.526715    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:09.544852    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:09.855665    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:10.015970    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:10.027013    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:10.028056    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:10.045795    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:10.510522    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:10.524848    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:10.525964    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:10.545075    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:11.010739    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:11.025236    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:11.026514    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:11.046557    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:11.510263    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:11.525190    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:11.526203    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:11.545569    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:11.856129    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:12.009920    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:12.025713    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:12.026717    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:12.045009    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:12.510614    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:12.526163    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:12.526827    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:12.545130    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:13.010380    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:13.025839    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:13.026568    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:13.046332    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:13.510801    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:13.525595    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:13.526311    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:13.545237    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:14.010196    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:14.025766    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:14.026781    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:14.044980    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:14.356359    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:14.510075    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:14.526191    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:14.526191    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:14.544867    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:15.009998    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:15.034208    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:15.035493    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:15.047743    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:15.510199    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:15.525979    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:15.526742    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:15.545731    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:16.010598    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:16.025273    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:16.026280    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:16.045508    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:16.357045    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:16.509935    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:16.525435    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:16.526168    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:16.545285    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:17.010965    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:17.025728    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:17.027056    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:17.045407    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:17.510826    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:17.525774    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:17.526632    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:17.545682    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:18.009931    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:18.025975    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:18.026670    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:18.045113    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:18.511234    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:18.524862    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:18.526011    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:18.545415    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:18.856244    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:19.010527    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:19.025169    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:19.025781    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:19.045501    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:19.510470    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:19.525801    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:19.526344    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:19.545876    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:20.011427    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:20.026576    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:20.027715    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:20.045339    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:20.510476    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:20.525894    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:20.526556    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:20.544789    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:21.010478    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:21.026517    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:21.026712    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:21.046557    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:21.355486    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:21.510538    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:21.525130    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:21.525971    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:21.544906    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:22.010276    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:22.026443    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:22.027683    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:22.045372    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:22.510580    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:22.525464    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:22.526797    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:22.544978    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:23.010711    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:23.026806    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:23.027014    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:23.045825    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:23.356069    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:23.510659    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:23.525430    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:23.526399    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:23.545349    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:24.010215    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:24.025064    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:24.026090    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:24.046076    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:24.510883    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:24.525928    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:24.527281    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:24.545588    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:25.010414    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:25.026528    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:25.027131    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:25.046308    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:25.510642    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:25.526422    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:25.527130    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:25.545388    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:25.856293    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:26.010490    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:26.025660    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:26.026411    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:26.045694    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:26.509918    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:26.526201    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:26.526975    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:26.545787    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:27.010439    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:27.025934    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:27.026830    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:27.046014    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:27.510485    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:27.525790    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:27.526739    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:27.544871    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:28.010404    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:28.025938    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:28.027054    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:28.045239    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:28.355793    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:28.510966    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:28.526016    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:28.526943    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:28.545331    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:29.010463    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:29.025999    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:29.026262    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:29.045608    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:29.510310    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:29.525716    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:29.526440    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:29.545556    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:30.011411    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:30.037609    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:30.039215    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:30.046478    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:30.356522    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:30.510484    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:30.525761    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:30.526599    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:30.545627    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:31.010151    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:31.025763    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:31.028131    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:31.045507    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:31.510704    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:31.525681    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:31.526399    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:31.545558    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:32.010997    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:32.026269    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:32.027077    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:32.046364    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:32.510051    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:32.525140    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:32.526324    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:32.545371    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:32.855916    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:33.010328    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:33.025471    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:33.026133    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:33.045887    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:33.510407    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:33.525802    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:33.527114    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:33.545085    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:34.009843    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:34.025666    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:34.026897    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:34.045084    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:34.511059    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:34.525256    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:34.526043    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:34.545183    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:35.011280    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:35.025496    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:35.026360    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:35.046086    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:35.356404    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:35.511806    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:35.526080    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:35.526513    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:35.546020    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:36.010030    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:36.026382    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:36.026979    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:36.045656    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:36.509957    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:36.526020    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:36.526275    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:36.545100    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:37.011140    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:37.027109    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:37.027959    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:37.046024    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:37.510865    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:37.525927    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:37.526468    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:37.545227    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:37.856229    8328 node_ready.go:53] node "addons-561541" has status "Ready":"False"
	I1004 02:49:38.010588    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:38.025083    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:38.026153    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:38.045398    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:38.510926    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:38.525552    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:38.526411    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:38.545475    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:39.010742    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:39.025692    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:39.026616    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:39.047185    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:39.510892    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:39.526060    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:39.526747    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:39.545103    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:40.015512    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:40.037303    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:40.042268    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:40.123777    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:40.382608    8328 node_ready.go:49] node "addons-561541" has status "Ready":"True"
	I1004 02:49:40.382688    8328 node_ready.go:38] duration metric: took 46.030250639s for node "addons-561541" to be "Ready" ...
	I1004 02:49:40.382713    8328 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:49:40.411056    8328 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l72ll" in "kube-system" namespace to be "Ready" ...
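	At 02:49:40 the node finally reports Ready (46s after the wait started), and pod_ready.go switches to waiting, for up to 6m0s, on the system-critical pods identified by the label selectors listed above. A rough shell equivalent of that check, as a sketch rather than minikube's actual Go implementation:

		# Wait for each system-critical component, using the selectors from the log
		for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
		           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
		  kubectl --context addons-561541 -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
		done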
	I1004 02:49:40.564091    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:40.566225    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:40.567527    8328 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1004 02:49:40.567646    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:40.567775    8328 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1004 02:49:40.567795    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:41.026954    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:41.043416    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:41.044137    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:41.057464    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:41.418794    8328 pod_ready.go:93] pod "coredns-7c65d6cfc9-l72ll" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.418821    8328 pod_ready.go:82] duration metric: took 1.007677628s for pod "coredns-7c65d6cfc9-l72ll" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.418873    8328 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.424590    8328 pod_ready.go:93] pod "etcd-addons-561541" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.424666    8328 pod_ready.go:82] duration metric: took 5.776713ms for pod "etcd-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.424685    8328 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.430286    8328 pod_ready.go:93] pod "kube-apiserver-addons-561541" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.430310    8328 pod_ready.go:82] duration metric: took 5.615673ms for pod "kube-apiserver-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.430323    8328 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.435657    8328 pod_ready.go:93] pod "kube-controller-manager-addons-561541" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.435684    8328 pod_ready.go:82] duration metric: took 5.351527ms for pod "kube-controller-manager-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.435707    8328 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hrkf9" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.516513    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:41.526649    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:41.527971    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:41.545114    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:41.557161    8328 pod_ready.go:93] pod "kube-proxy-hrkf9" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.557185    8328 pod_ready.go:82] duration metric: took 121.46867ms for pod "kube-proxy-hrkf9" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.557197    8328 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.957390    8328 pod_ready.go:93] pod "kube-scheduler-addons-561541" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.957463    8328 pod_ready.go:82] duration metric: took 400.257187ms for pod "kube-scheduler-addons-561541" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.957494    8328 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.013216    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:42.030749    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:42.033979    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:42.046361    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:42.516015    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:42.528872    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:42.529311    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:42.548338    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:43.012228    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:43.029252    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:43.031428    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:43.048306    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:43.513374    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:43.528329    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:43.529330    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:43.545957    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:43.963614    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:44.014595    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:44.027763    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:44.028423    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:44.045435    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:44.511992    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:44.526213    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:44.527450    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:44.545988    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:45.012914    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:45.029795    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:45.031520    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:45.047816    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:45.511893    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:45.527616    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:45.528441    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:45.546107    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:45.965001    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:46.012282    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:46.029829    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:46.032437    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:46.046301    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:46.512667    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:46.529958    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:46.532771    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:46.546591    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:47.011226    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:47.027489    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:47.028463    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:47.045696    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:47.513065    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:47.528687    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:47.530187    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:47.545627    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:47.967283    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:48.013463    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:48.029916    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:48.030983    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:48.046552    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:48.513932    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:48.530385    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:48.533138    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:48.546201    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:49.011620    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:49.034232    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:49.035830    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:49.051166    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:49.513049    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:49.529279    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:49.531848    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:49.545973    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:50.023183    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:50.030314    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:50.032018    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:50.047274    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:50.465624    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:50.512486    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:50.528161    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:50.528852    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:50.545585    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:51.012170    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:51.027019    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:51.028228    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:51.046451    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:51.513302    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:51.527383    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:51.528889    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:51.545723    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:52.012776    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:52.027668    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:52.029019    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:52.045548    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:52.512246    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:52.526126    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:52.528222    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:52.545348    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:52.964362    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:53.012235    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:53.028072    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:53.030090    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:53.046046    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:53.514737    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:53.533037    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:53.533981    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:53.545529    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:54.011810    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:54.030174    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:54.031513    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:54.046924    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:54.511634    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:54.527201    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:54.528278    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:54.545874    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:54.964745    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:55.012158    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:55.028052    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:55.029695    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:55.046695    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:55.512977    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:55.526087    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:55.527361    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:55.546041    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:56.011883    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:56.025797    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:56.028531    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:56.045756    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:56.511897    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:56.525664    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:56.526307    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:56.545713    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:56.965963    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:57.012552    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:57.027768    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:57.028541    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:57.048011    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:57.512756    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:57.527895    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:57.528350    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:57.545410    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:58.012358    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:58.031770    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:58.033137    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:58.046211    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:58.512190    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:58.526470    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:58.527733    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:58.545899    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:59.011851    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:59.026790    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:59.028646    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:59.046032    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:59.471076    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:59.535361    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:59.542920    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:59.548064    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:59.556709    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:00.018518    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:00.029291    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:00.030801    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:00.052886    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:00.511829    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:00.527281    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:00.528522    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:00.546340    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:01.011827    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:01.032324    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:01.033615    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:01.051637    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:01.511851    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:01.533988    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:01.535320    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:01.545983    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:01.966238    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:02.012092    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:02.028097    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:02.029824    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:02.049079    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:02.512111    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:02.526621    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:02.526916    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:02.545844    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:03.011709    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:03.026388    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:03.027450    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:03.046663    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:03.521995    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:03.527796    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:03.529478    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:03.552946    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:03.973502    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:04.012581    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:04.027713    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:04.029311    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:04.045755    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:04.513690    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:04.529132    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:04.530128    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:04.612657    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:05.012110    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:05.026873    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:05.027496    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:05.045432    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:05.512066    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:05.526274    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:05.527269    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:05.545644    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:06.012469    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:06.028507    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:06.029899    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:06.046550    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:06.466818    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:06.512451    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:06.530120    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:06.532449    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:06.546602    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:07.012528    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:07.026183    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:07.026744    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:07.046173    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:07.512194    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:07.526261    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:07.527287    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:07.545732    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:08.012432    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:08.029605    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:08.035532    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:08.046656    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:08.512593    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:08.526621    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:08.528155    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:08.547423    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:08.969775    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:09.011729    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:09.027288    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:09.029037    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:09.045616    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:09.512350    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:09.533352    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:09.534730    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:09.612495    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:10.012719    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:10.030173    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:10.030758    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:10.045728    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:10.512121    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:10.526483    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:10.527719    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:10.546223    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:11.011944    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:11.025975    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:11.026460    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:11.046175    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:11.464194    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:11.511674    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:11.526150    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:11.527476    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:11.545623    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:12.016136    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:12.121075    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:12.121426    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:12.123391    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:12.514011    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:12.528538    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:12.529581    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:12.547275    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:13.012068    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:13.027284    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:13.028066    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:13.045943    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:13.464815    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:13.521784    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:13.540043    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:13.542168    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:13.559288    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:14.012932    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:14.031008    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:14.032956    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:14.046149    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:14.512308    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:14.529727    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:14.531132    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:14.545587    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:15.016505    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:15.043212    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:15.047887    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:15.050602    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:15.469154    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:15.512533    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:15.527081    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:15.529221    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:15.555002    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:16.012128    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:16.029375    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:16.030006    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:16.048582    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:16.512096    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:16.527515    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:16.529098    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:16.546056    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:17.013505    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:17.028133    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:17.029488    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:17.045705    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:17.512448    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:17.525985    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:17.526559    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:17.545164    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:17.963071    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:18.011935    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:18.027216    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:18.028014    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:18.045715    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:18.512198    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:18.527621    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:18.528388    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:18.546035    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:19.012538    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:19.027138    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:19.029107    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:19.049247    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:19.515245    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:19.528243    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:19.528933    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:19.612128    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:19.974572    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:20.014185    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:20.031069    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:20.034181    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:20.113988    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:20.512045    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:20.525634    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:20.526324    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:20.545780    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:21.011316    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:21.038255    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:21.039487    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:21.046681    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:21.512774    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:21.526238    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:21.527331    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:21.611218    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:22.012871    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:22.026396    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:22.026993    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:22.045072    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:22.465059    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:22.512249    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:22.527076    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:22.529809    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:22.546302    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:23.013252    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:23.029859    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:23.031069    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:23.050464    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:23.512727    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:23.527887    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:23.528493    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:23.545794    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:24.013248    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:24.035854    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:24.037042    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:24.050520    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:24.511961    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:24.525382    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:24.526438    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:24.545577    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:24.964317    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:25.011703    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:25.026649    8328 kapi.go:107] duration metric: took 1m24.505099943s to wait for kubernetes.io/minikube-addons=registry ...
	I1004 02:50:25.030573    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:25.046129    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:25.512084    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:25.526441    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:25.545818    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:26.011391    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:26.026758    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:26.046036    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:26.514027    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:26.527482    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:26.545867    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:26.965224    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:27.021182    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:27.029360    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:27.048961    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:27.519958    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:27.531205    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:27.555003    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:28.013433    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:28.027350    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:28.046215    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:28.513245    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:28.526668    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:28.546119    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:28.966977    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:29.012357    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:29.027809    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:29.047183    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:29.512235    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:29.526495    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:29.545755    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:30.016445    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:30.048081    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:30.049866    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:30.512919    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:30.527214    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:30.548261    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:31.012402    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:31.026801    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:31.046201    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:31.468779    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:31.511890    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:31.528259    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:31.545648    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:32.012953    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:32.025968    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:32.045260    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:32.514676    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:32.526333    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:32.615633    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:33.012108    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:33.111274    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:33.112761    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:33.511802    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:33.526351    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:33.545326    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:33.964291    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:34.012245    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:34.025871    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:34.045851    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:34.512665    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:34.526198    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:34.545566    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:35.012501    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:35.033436    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:35.046398    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:35.511832    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:35.525852    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:35.544971    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:36.018851    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:36.028975    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:36.050153    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:36.466267    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:36.514624    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:36.526158    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:36.544888    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:37.012270    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:37.026672    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:37.047216    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:37.512992    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:37.526023    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:37.547093    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:38.012401    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:38.027677    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:38.046498    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:38.471627    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:38.513963    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:38.526523    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:38.545758    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:39.018575    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:39.025757    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:39.046753    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:39.513344    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:39.526005    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:39.545749    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:40.031853    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:40.032073    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:40.045855    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:40.512528    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:40.527260    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:40.545443    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:40.963697    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:41.021267    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:41.026197    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:41.046550    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:41.511470    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:41.527030    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:41.546093    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:42.011538    8328 kapi.go:107] duration metric: took 1m41.004857372s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1004 02:50:42.026490    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:42.045421    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:42.526841    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:42.546135    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:42.963754    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:43.025960    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:43.051253    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:43.525524    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:43.545635    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:44.027046    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:44.045167    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:44.526087    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:44.544968    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:44.964098    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:45.038146    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:45.046479    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:45.525956    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:45.545462    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:46.026075    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:46.045133    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:46.526163    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:46.545077    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:46.964847    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:47.028073    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:47.045660    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:47.526944    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:47.547135    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:48.029196    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:48.046145    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:48.527796    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:48.547072    8328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:49.031299    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:49.127481    8328 kapi.go:107] duration metric: took 1m43.585550454s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1004 02:50:49.129332    8328 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-561541 cluster.
	I1004 02:50:49.131243    8328 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1004 02:50:49.132780    8328 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
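The three gcp-auth messages above describe the addon's behaviour: once enabled, credentials are injected into new pods unless the pod carries the `gcp-auth-skip-secret` label. A minimal sketch of a pod that opts out is below; it is an illustration only, not minikube code — the label value "true", the `busybox:1.36` image, and the clientcmd/kubeconfig wiring are assumptions (the log only names the label key).

```go
// Sketch: create a pod that the gcp-auth webhook should skip, per the
// message above. Label value, image, and kubeconfig handling are assumptions.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds",
			// Presence of this label is what the addon message refers to;
			// the value "true" is an assumption for illustration.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.36", // arbitrary example image
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```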
	I1004 02:50:49.464079    8328 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"False"
	I1004 02:50:49.530083    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:50.032164    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:50.485951    8328 pod_ready.go:93] pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace has status "Ready":"True"
	I1004 02:50:50.485978    8328 pod_ready.go:82] duration metric: took 1m8.528462024s for pod "metrics-server-84c5f94fbc-4hhst" in "kube-system" namespace to be "Ready" ...
	I1004 02:50:50.485990    8328 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5nsmh" in "kube-system" namespace to be "Ready" ...
	I1004 02:50:50.495203    8328 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-5nsmh" in "kube-system" namespace has status "Ready":"True"
	I1004 02:50:50.495230    8328 pod_ready.go:82] duration metric: took 9.231804ms for pod "nvidia-device-plugin-daemonset-5nsmh" in "kube-system" namespace to be "Ready" ...
	I1004 02:50:50.495254    8328 pod_ready.go:39] duration metric: took 1m10.112497833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
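The pod_ready.go lines above poll a pod's "Ready" condition until it reports "True". A minimal client-go sketch of that check is below, assuming a local kubeconfig; it is an illustration of the same condition test, not minikube's pod_ready.go implementation.

```go
// Sketch: read a pod and report whether its PodReady condition is True,
// the status the pod_ready.go log lines above wait for.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady returns true only if the PodReady condition is present and True.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Pod name taken from the log above; adjust for your own cluster.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-84c5f94fbc-4hhst", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podIsReady(pod))
}
```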
	I1004 02:50:50.495278    8328 api_server.go:52] waiting for apiserver process to appear ...
	I1004 02:50:50.495313    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 02:50:50.495380    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 02:50:50.545764    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:50.616884    8328 cri.go:89] found id: "94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4"
	I1004 02:50:50.616908    8328 cri.go:89] found id: ""
	I1004 02:50:50.616915    8328 logs.go:282] 1 containers: [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4]
	I1004 02:50:50.616976    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:50.639179    8328 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 02:50:50.639255    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 02:50:50.977637    8328 cri.go:89] found id: "ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab"
	I1004 02:50:50.977662    8328 cri.go:89] found id: ""
	I1004 02:50:50.977670    8328 logs.go:282] 1 containers: [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab]
	I1004 02:50:50.977732    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:50.981391    8328 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 02:50:50.981464    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 02:50:51.029735    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:51.121360    8328 cri.go:89] found id: "18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575"
	I1004 02:50:51.121387    8328 cri.go:89] found id: ""
	I1004 02:50:51.121395    8328 logs.go:282] 1 containers: [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575]
	I1004 02:50:51.121450    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:51.128678    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 02:50:51.128751    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 02:50:51.367292    8328 cri.go:89] found id: "170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab"
	I1004 02:50:51.367311    8328 cri.go:89] found id: ""
	I1004 02:50:51.367318    8328 logs.go:282] 1 containers: [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab]
	I1004 02:50:51.367371    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:51.383474    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 02:50:51.383547    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 02:50:51.470791    8328 cri.go:89] found id: "c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f"
	I1004 02:50:51.470813    8328 cri.go:89] found id: ""
	I1004 02:50:51.470821    8328 logs.go:282] 1 containers: [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f]
	I1004 02:50:51.470874    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:51.474792    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 02:50:51.474876    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 02:50:51.526658    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:51.590749    8328 cri.go:89] found id: "6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10"
	I1004 02:50:51.590772    8328 cri.go:89] found id: ""
	I1004 02:50:51.590781    8328 logs.go:282] 1 containers: [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10]
	I1004 02:50:51.590834    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:50:51.605782    8328 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 02:50:51.605856    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 02:50:51.709376    8328 cri.go:89] found id: "11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e"
	I1004 02:50:51.709402    8328 cri.go:89] found id: ""
	I1004 02:50:51.709410    8328 logs.go:282] 1 containers: [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e]
	I1004 02:50:51.709465    8328 ssh_runner.go:195] Run: which crictl
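The cri.go lines above repeat the same sequence for each control-plane component: list container IDs with `crictl ps -a --quiet --name=<name>`, then (in the logs.go lines that follow) tail each container's logs. A small local sketch of that sequence is below; it shells out directly rather than over SSH into the node, and assumes `sudo` and `crictl` are available, so it is an illustration of the commands shown in the log rather than minikube's ssh_runner code.

```go
// Sketch: list CRI containers by name and tail their logs, mirroring the
// crictl invocations recorded in the log above. Assumes local sudo + crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns the IDs.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		panic(err)
	}
	for _, id := range ids {
		// Tail the last 400 lines, matching the `crictl logs --tail 400 <id>` calls in the log.
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("could not read logs for %s: %v\n", id, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", id, logs)
	}
}
```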
	I1004 02:50:51.714751    8328 logs.go:123] Gathering logs for describe nodes ...
	I1004 02:50:51.714777    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 02:50:52.040764    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:52.057444    8328 logs.go:123] Gathering logs for kube-proxy [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f] ...
	I1004 02:50:52.057476    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f"
	I1004 02:50:52.229777    8328 logs.go:123] Gathering logs for kindnet [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e] ...
	I1004 02:50:52.229805    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e"
	I1004 02:50:52.354308    8328 logs.go:123] Gathering logs for CRI-O ...
	I1004 02:50:52.354336    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 02:50:52.472238    8328 logs.go:123] Gathering logs for kubelet ...
	I1004 02:50:52.472274    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 02:50:52.538540    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1004 02:50:52.556669    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: W1004 02:48:57.726567    1504 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-561541' and this object
	W1004 02:50:52.556913    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: E1004 02:48:57.726640    1504 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:50:52.557090    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: W1004 02:48:57.731935    1504 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-561541' and this object
	W1004 02:50:52.557413    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: E1004 02:48:57.731984    1504 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:50:52.566276    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077336    1504 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-561541" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:50:52.566493    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077383    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:50:52.566678    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077463    1504 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:50:52.566900    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077498    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:50:52.567080    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.114008    1504 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-561541' and this object
	W1004 02:50:52.567309    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.114055    1504 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	I1004 02:50:52.606705    8328 logs.go:123] Gathering logs for dmesg ...
	I1004 02:50:52.606739    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 02:50:52.637501    8328 logs.go:123] Gathering logs for kube-apiserver [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4] ...
	I1004 02:50:52.637526    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4"
	I1004 02:50:52.762777    8328 logs.go:123] Gathering logs for etcd [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab] ...
	I1004 02:50:52.762857    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab"
	I1004 02:50:52.850377    8328 logs.go:123] Gathering logs for coredns [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575] ...
	I1004 02:50:52.850459    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575"
	I1004 02:50:52.903295    8328 logs.go:123] Gathering logs for kube-scheduler [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab] ...
	I1004 02:50:52.903384    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab"
	I1004 02:50:52.981682    8328 logs.go:123] Gathering logs for kube-controller-manager [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10] ...
	I1004 02:50:52.981758    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10"
	I1004 02:50:53.026628    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:53.092318    8328 logs.go:123] Gathering logs for container status ...
	I1004 02:50:53.092354    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 02:50:53.195358    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:50:53.195385    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1004 02:50:53.195435    8328 out.go:270] X Problems detected in kubelet:
	W1004 02:50:53.195452    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077383    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:50:53.195463    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077463    1504 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:50:53.195474    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077498    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:50:53.195483    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.114008    1504 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-561541' and this object
	W1004 02:50:53.195489    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.114055    1504 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	I1004 02:50:53.195495    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:50:53.195501    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:50:53.526044    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:54.027034    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:54.527896    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:55.027090    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:55.526608    8328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:56.031668    8328 kapi.go:107] duration metric: took 1m55.510137071s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1004 02:50:56.033861    8328 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, logviewer, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1004 02:50:56.035371    8328 addons.go:510] duration metric: took 2m2.34435321s for enable addons: enabled=[default-storageclass storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns metrics-server logviewer inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
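The long runs of "kapi.go:96] waiting for pod ... current state: Pending" above, each ending in a "kapi.go:107] duration metric: took ..." line, are a label-selector poll: list the pods matching the addon's selector and keep retrying until they are all up. A minimal client-go sketch of that pattern is below; the 500ms interval, the Running-phase test, and the ingress-nginx namespace are assumptions for illustration, not minikube's kapi.go.

```go
// Sketch: poll pods matching a label selector until all are Running, the
// pattern the kapi.go "waiting for pod" lines above reflect.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSelector polls every 500ms until every pod matching selector is Running
// or the timeout expires. Timings are illustrative.
func waitForSelector(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling on transient errors or empty lists
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Selector taken from the log above; namespace is an assumption.
	if err := waitForSelector(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
}
```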
	I1004 02:51:03.196777    8328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 02:51:03.210660    8328 api_server.go:72] duration metric: took 2m9.520011882s to wait for apiserver process to appear ...
	I1004 02:51:03.210687    8328 api_server.go:88] waiting for apiserver healthz status ...
	I1004 02:51:03.210721    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 02:51:03.210783    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 02:51:03.250166    8328 cri.go:89] found id: "94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4"
	I1004 02:51:03.250193    8328 cri.go:89] found id: ""
	I1004 02:51:03.250201    8328 logs.go:282] 1 containers: [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4]
	I1004 02:51:03.250255    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.253725    8328 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 02:51:03.253797    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 02:51:03.293934    8328 cri.go:89] found id: "ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab"
	I1004 02:51:03.293956    8328 cri.go:89] found id: ""
	I1004 02:51:03.293964    8328 logs.go:282] 1 containers: [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab]
	I1004 02:51:03.294023    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.297421    8328 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 02:51:03.297493    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 02:51:03.335321    8328 cri.go:89] found id: "18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575"
	I1004 02:51:03.335342    8328 cri.go:89] found id: ""
	I1004 02:51:03.335349    8328 logs.go:282] 1 containers: [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575]
	I1004 02:51:03.335410    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.338795    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 02:51:03.338873    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 02:51:03.379250    8328 cri.go:89] found id: "170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab"
	I1004 02:51:03.379273    8328 cri.go:89] found id: ""
	I1004 02:51:03.379282    8328 logs.go:282] 1 containers: [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab]
	I1004 02:51:03.379336    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.382822    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 02:51:03.382894    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 02:51:03.421723    8328 cri.go:89] found id: "c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f"
	I1004 02:51:03.421748    8328 cri.go:89] found id: ""
	I1004 02:51:03.421756    8328 logs.go:282] 1 containers: [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f]
	I1004 02:51:03.421812    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.425066    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 02:51:03.425138    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 02:51:03.462203    8328 cri.go:89] found id: "6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10"
	I1004 02:51:03.462236    8328 cri.go:89] found id: ""
	I1004 02:51:03.462244    8328 logs.go:282] 1 containers: [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10]
	I1004 02:51:03.462300    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.465754    8328 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 02:51:03.465825    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 02:51:03.504988    8328 cri.go:89] found id: "11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e"
	I1004 02:51:03.505011    8328 cri.go:89] found id: ""
	I1004 02:51:03.505019    8328 logs.go:282] 1 containers: [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e]
	I1004 02:51:03.505076    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:03.508568    8328 logs.go:123] Gathering logs for kube-scheduler [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab] ...
	I1004 02:51:03.508602    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab"
	I1004 02:51:03.552151    8328 logs.go:123] Gathering logs for kube-controller-manager [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10] ...
	I1004 02:51:03.552181    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10"
	I1004 02:51:03.624643    8328 logs.go:123] Gathering logs for container status ...
	I1004 02:51:03.624679    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 02:51:03.685963    8328 logs.go:123] Gathering logs for kubelet ...
	I1004 02:51:03.685990    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1004 02:51:03.747485    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: W1004 02:48:57.726567    1504 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-561541' and this object
	W1004 02:51:03.747726    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: E1004 02:48:57.726640    1504 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:03.747905    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: W1004 02:48:57.731935    1504 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-561541' and this object
	W1004 02:51:03.748120    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: E1004 02:48:57.731984    1504 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:03.756652    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077336    1504 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-561541" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:51:03.756858    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077383    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:03.757039    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077463    1504 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:51:03.757266    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077498    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:03.757445    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.114008    1504 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-561541' and this object
	W1004 02:51:03.757667    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.114055    1504 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	I1004 02:51:03.795796    8328 logs.go:123] Gathering logs for etcd [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab] ...
	I1004 02:51:03.795817    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab"
	I1004 02:51:03.856192    8328 logs.go:123] Gathering logs for coredns [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575] ...
	I1004 02:51:03.856228    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575"
	I1004 02:51:03.908105    8328 logs.go:123] Gathering logs for kube-proxy [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f] ...
	I1004 02:51:03.908142    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f"
	I1004 02:51:03.952594    8328 logs.go:123] Gathering logs for kindnet [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e] ...
	I1004 02:51:03.952621    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e"
	I1004 02:51:03.994740    8328 logs.go:123] Gathering logs for CRI-O ...
	I1004 02:51:03.994767    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 02:51:04.088169    8328 logs.go:123] Gathering logs for dmesg ...
	I1004 02:51:04.088207    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 02:51:04.101739    8328 logs.go:123] Gathering logs for describe nodes ...
	I1004 02:51:04.101767    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 02:51:04.239828    8328 logs.go:123] Gathering logs for kube-apiserver [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4] ...
	I1004 02:51:04.239865    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4"
	I1004 02:51:04.292880    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:51:04.292907    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1004 02:51:04.292987    8328 out.go:270] X Problems detected in kubelet:
	W1004 02:51:04.293750    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077383    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:04.293773    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077463    1504 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:51:04.293781    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077498    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:04.293788    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.114008    1504 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-561541' and this object
	W1004 02:51:04.293796    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.114055    1504 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	I1004 02:51:04.293809    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:51:04.293825    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:51:14.295472    8328 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 02:51:14.302959    8328 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1004 02:51:14.304692    8328 api_server.go:141] control plane version: v1.31.1
	I1004 02:51:14.304717    8328 api_server.go:131] duration metric: took 11.094022067s to wait for apiserver health ...
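The api_server.go lines above probe the control plane's /healthz endpoint and treat a 200 response with body "ok" as healthy. A bare-bones sketch of that probe is below; the endpoint URL is copied from the log, but skipping TLS verification is an assumption made for brevity (a real client would trust the cluster CA from the kubeconfig instead).

```go
// Sketch: hit the apiserver healthz endpoint reported in the log above.
// InsecureSkipVerify is for illustration only; use the cluster CA in practice.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as recorded above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
```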
	I1004 02:51:14.304735    8328 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 02:51:14.304758    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 02:51:14.304828    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 02:51:14.357990    8328 cri.go:89] found id: "94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4"
	I1004 02:51:14.358021    8328 cri.go:89] found id: ""
	I1004 02:51:14.358029    8328 logs.go:282] 1 containers: [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4]
	I1004 02:51:14.358083    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.362819    8328 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 02:51:14.362892    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 02:51:14.416154    8328 cri.go:89] found id: "ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab"
	I1004 02:51:14.416179    8328 cri.go:89] found id: ""
	I1004 02:51:14.416188    8328 logs.go:282] 1 containers: [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab]
	I1004 02:51:14.416240    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.419531    8328 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 02:51:14.419601    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 02:51:14.465469    8328 cri.go:89] found id: "18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575"
	I1004 02:51:14.465492    8328 cri.go:89] found id: ""
	I1004 02:51:14.465500    8328 logs.go:282] 1 containers: [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575]
	I1004 02:51:14.465562    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.469176    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 02:51:14.469271    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 02:51:14.506010    8328 cri.go:89] found id: "170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab"
	I1004 02:51:14.506030    8328 cri.go:89] found id: ""
	I1004 02:51:14.506037    8328 logs.go:282] 1 containers: [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab]
	I1004 02:51:14.506095    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.509521    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 02:51:14.509587    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 02:51:14.545799    8328 cri.go:89] found id: "c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f"
	I1004 02:51:14.545821    8328 cri.go:89] found id: ""
	I1004 02:51:14.545829    8328 logs.go:282] 1 containers: [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f]
	I1004 02:51:14.545883    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.549163    8328 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 02:51:14.549285    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 02:51:14.586324    8328 cri.go:89] found id: "6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10"
	I1004 02:51:14.586391    8328 cri.go:89] found id: ""
	I1004 02:51:14.586407    8328 logs.go:282] 1 containers: [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10]
	I1004 02:51:14.586476    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.589894    8328 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 02:51:14.589988    8328 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 02:51:14.627138    8328 cri.go:89] found id: "11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e"
	I1004 02:51:14.627161    8328 cri.go:89] found id: ""
	I1004 02:51:14.627168    8328 logs.go:282] 1 containers: [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e]
	I1004 02:51:14.627241    8328 ssh_runner.go:195] Run: which crictl
	I1004 02:51:14.630613    8328 logs.go:123] Gathering logs for etcd [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab] ...
	I1004 02:51:14.630638    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab"
	I1004 02:51:14.702143    8328 logs.go:123] Gathering logs for coredns [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575] ...
	I1004 02:51:14.702175    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575"
	I1004 02:51:14.741646    8328 logs.go:123] Gathering logs for kube-proxy [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f] ...
	I1004 02:51:14.741673    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f"
	I1004 02:51:14.779592    8328 logs.go:123] Gathering logs for kube-controller-manager [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10] ...
	I1004 02:51:14.779623    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10"
	I1004 02:51:14.865327    8328 logs.go:123] Gathering logs for kindnet [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e] ...
	I1004 02:51:14.865407    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e"
	I1004 02:51:14.910177    8328 logs.go:123] Gathering logs for dmesg ...
	I1004 02:51:14.910210    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 02:51:14.929759    8328 logs.go:123] Gathering logs for describe nodes ...
	I1004 02:51:14.929789    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 02:51:15.089176    8328 logs.go:123] Gathering logs for kube-apiserver [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4] ...
	I1004 02:51:15.090422    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4"
	I1004 02:51:15.179936    8328 logs.go:123] Gathering logs for container status ...
	I1004 02:51:15.179970    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 02:51:15.245521    8328 logs.go:123] Gathering logs for kubelet ...
	I1004 02:51:15.245551    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1004 02:51:15.313004    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: W1004 02:48:57.726567    1504 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.313287    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: E1004 02:48:57.726640    1504 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:15.313486    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: W1004 02:48:57.731935    1504 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.313707    8328 logs.go:138] Found kubelet problem: Oct 04 02:48:57 addons-561541 kubelet[1504]: E1004 02:48:57.731984    1504 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:15.322283    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077336    1504 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-561541" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.322498    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077383    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:15.322680    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077463    1504 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.322903    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077498    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:15.323085    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.114008    1504 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.323305    8328 logs.go:138] Found kubelet problem: Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.114055    1504 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	I1004 02:51:15.363628    8328 logs.go:123] Gathering logs for kube-scheduler [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab] ...
	I1004 02:51:15.363655    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab"
	I1004 02:51:15.407886    8328 logs.go:123] Gathering logs for CRI-O ...
	I1004 02:51:15.407920    8328 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 02:51:15.501398    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:51:15.501427    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1004 02:51:15.501501    8328 out.go:270] X Problems detected in kubelet:
	W1004 02:51:15.501516    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077383    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:15.501533    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.077463    1504 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.501556    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.077498    1504 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	W1004 02:51:15.501571    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: W1004 02:49:40.114008    1504 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-561541" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-561541' and this object
	W1004 02:51:15.501596    8328 out.go:270]   Oct 04 02:49:40 addons-561541 kubelet[1504]: E1004 02:49:40.114055    1504 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-561541\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-561541' and this object" logger="UnhandledError"
	I1004 02:51:15.501603    8328 out.go:358] Setting ErrFile to fd 2...
	I1004 02:51:15.501616    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:51:25.513781    8328 system_pods.go:59] 19 kube-system pods found
	I1004 02:51:25.513822    8328 system_pods.go:61] "coredns-7c65d6cfc9-l72ll" [c7bdc99e-d5d1-449c-968d-9cbcbe5d3883] Running
	I1004 02:51:25.513829    8328 system_pods.go:61] "csi-hostpath-attacher-0" [5d9da269-be6d-4ac1-bdfb-dc06753ed143] Running
	I1004 02:51:25.513835    8328 system_pods.go:61] "csi-hostpath-resizer-0" [a74cf24a-1485-4e39-8fe2-61eb95046f16] Running
	I1004 02:51:25.513840    8328 system_pods.go:61] "csi-hostpathplugin-2kf2t" [d2bbfca4-9688-425d-bdcd-97cc2c84c619] Running
	I1004 02:51:25.513845    8328 system_pods.go:61] "etcd-addons-561541" [a0edb3d8-1cb5-406e-aee7-b7a3163a557d] Running
	I1004 02:51:25.513849    8328 system_pods.go:61] "kindnet-7tqxs" [23685d4c-c7f4-4c2a-bbd6-ab4f572a2a2a] Running
	I1004 02:51:25.513854    8328 system_pods.go:61] "kube-apiserver-addons-561541" [97914951-e72b-45ba-b901-1142d3c9b967] Running
	I1004 02:51:25.513858    8328 system_pods.go:61] "kube-controller-manager-addons-561541" [db37bb5d-abd2-4e6c-a469-58176fe06cb9] Running
	I1004 02:51:25.513868    8328 system_pods.go:61] "kube-ingress-dns-minikube" [40574e9c-4112-4693-9361-ac3a76c1f048] Running
	I1004 02:51:25.513872    8328 system_pods.go:61] "kube-proxy-hrkf9" [6c693613-dcb1-4111-87d2-936d6b82b963] Running
	I1004 02:51:25.513879    8328 system_pods.go:61] "kube-scheduler-addons-561541" [48973e95-fe2f-4074-a1bf-7afb482c6609] Running
	I1004 02:51:25.513883    8328 system_pods.go:61] "logviewer-7c79c8bcc9-2b554" [75a7f403-12b6-4f98-b0af-8bf7c3aa0ab1] Running
	I1004 02:51:25.513894    8328 system_pods.go:61] "metrics-server-84c5f94fbc-4hhst" [7577c62c-151a-4a09-91f6-abd270367e65] Running
	I1004 02:51:25.513900    8328 system_pods.go:61] "nvidia-device-plugin-daemonset-5nsmh" [417c82a7-a3be-4373-b14a-9d52e4aaa1d2] Running
	I1004 02:51:25.513905    8328 system_pods.go:61] "registry-66c9cd494c-lc5j7" [d1434ec1-9246-4eec-97cd-0ae38734e96e] Running
	I1004 02:51:25.513915    8328 system_pods.go:61] "registry-proxy-2kl22" [ee49d77e-84c1-4b75-b458-f901291a1eb8] Running
	I1004 02:51:25.513919    8328 system_pods.go:61] "snapshot-controller-56fcc65765-wwg4w" [706139ae-1a4c-44b6-b2bd-14c48c7d9286] Running
	I1004 02:51:25.513923    8328 system_pods.go:61] "snapshot-controller-56fcc65765-x9vhd" [3b89f1fe-763d-49f4-b65e-d0536fdd2293] Running
	I1004 02:51:25.513929    8328 system_pods.go:61] "storage-provisioner" [15c3949d-928d-4cd9-9c7f-828971d88260] Running
	I1004 02:51:25.513936    8328 system_pods.go:74] duration metric: took 11.209194297s to wait for pod list to return data ...
	I1004 02:51:25.513946    8328 default_sa.go:34] waiting for default service account to be created ...
	I1004 02:51:25.516771    8328 default_sa.go:45] found service account: "default"
	I1004 02:51:25.516795    8328 default_sa.go:55] duration metric: took 2.84218ms for default service account to be created ...
	I1004 02:51:25.516804    8328 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 02:51:25.527170    8328 system_pods.go:86] 19 kube-system pods found
	I1004 02:51:25.527249    8328 system_pods.go:89] "coredns-7c65d6cfc9-l72ll" [c7bdc99e-d5d1-449c-968d-9cbcbe5d3883] Running
	I1004 02:51:25.527268    8328 system_pods.go:89] "csi-hostpath-attacher-0" [5d9da269-be6d-4ac1-bdfb-dc06753ed143] Running
	I1004 02:51:25.527274    8328 system_pods.go:89] "csi-hostpath-resizer-0" [a74cf24a-1485-4e39-8fe2-61eb95046f16] Running
	I1004 02:51:25.527279    8328 system_pods.go:89] "csi-hostpathplugin-2kf2t" [d2bbfca4-9688-425d-bdcd-97cc2c84c619] Running
	I1004 02:51:25.527284    8328 system_pods.go:89] "etcd-addons-561541" [a0edb3d8-1cb5-406e-aee7-b7a3163a557d] Running
	I1004 02:51:25.527289    8328 system_pods.go:89] "kindnet-7tqxs" [23685d4c-c7f4-4c2a-bbd6-ab4f572a2a2a] Running
	I1004 02:51:25.527293    8328 system_pods.go:89] "kube-apiserver-addons-561541" [97914951-e72b-45ba-b901-1142d3c9b967] Running
	I1004 02:51:25.527298    8328 system_pods.go:89] "kube-controller-manager-addons-561541" [db37bb5d-abd2-4e6c-a469-58176fe06cb9] Running
	I1004 02:51:25.527324    8328 system_pods.go:89] "kube-ingress-dns-minikube" [40574e9c-4112-4693-9361-ac3a76c1f048] Running
	I1004 02:51:25.527335    8328 system_pods.go:89] "kube-proxy-hrkf9" [6c693613-dcb1-4111-87d2-936d6b82b963] Running
	I1004 02:51:25.527340    8328 system_pods.go:89] "kube-scheduler-addons-561541" [48973e95-fe2f-4074-a1bf-7afb482c6609] Running
	I1004 02:51:25.527344    8328 system_pods.go:89] "logviewer-7c79c8bcc9-2b554" [75a7f403-12b6-4f98-b0af-8bf7c3aa0ab1] Running
	I1004 02:51:25.527361    8328 system_pods.go:89] "metrics-server-84c5f94fbc-4hhst" [7577c62c-151a-4a09-91f6-abd270367e65] Running
	I1004 02:51:25.527372    8328 system_pods.go:89] "nvidia-device-plugin-daemonset-5nsmh" [417c82a7-a3be-4373-b14a-9d52e4aaa1d2] Running
	I1004 02:51:25.527376    8328 system_pods.go:89] "registry-66c9cd494c-lc5j7" [d1434ec1-9246-4eec-97cd-0ae38734e96e] Running
	I1004 02:51:25.527380    8328 system_pods.go:89] "registry-proxy-2kl22" [ee49d77e-84c1-4b75-b458-f901291a1eb8] Running
	I1004 02:51:25.527390    8328 system_pods.go:89] "snapshot-controller-56fcc65765-wwg4w" [706139ae-1a4c-44b6-b2bd-14c48c7d9286] Running
	I1004 02:51:25.527396    8328 system_pods.go:89] "snapshot-controller-56fcc65765-x9vhd" [3b89f1fe-763d-49f4-b65e-d0536fdd2293] Running
	I1004 02:51:25.527400    8328 system_pods.go:89] "storage-provisioner" [15c3949d-928d-4cd9-9c7f-828971d88260] Running
	I1004 02:51:25.527411    8328 system_pods.go:126] duration metric: took 10.600827ms to wait for k8s-apps to be running ...
	I1004 02:51:25.527423    8328 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 02:51:25.527509    8328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:51:25.539403    8328 system_svc.go:56] duration metric: took 11.970586ms WaitForService to wait for kubelet
	I1004 02:51:25.539435    8328 kubeadm.go:582] duration metric: took 2m31.848792494s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:51:25.539454    8328 node_conditions.go:102] verifying NodePressure condition ...
	I1004 02:51:25.543001    8328 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1004 02:51:25.543041    8328 node_conditions.go:123] node cpu capacity is 2
	I1004 02:51:25.543058    8328 node_conditions.go:105] duration metric: took 3.597644ms to run NodePressure ...
	I1004 02:51:25.543071    8328 start.go:241] waiting for startup goroutines ...
	I1004 02:51:25.543079    8328 start.go:246] waiting for cluster config update ...
	I1004 02:51:25.543097    8328 start.go:255] writing updated cluster config ...
	I1004 02:51:25.543864    8328 ssh_runner.go:195] Run: rm -f paused
	I1004 02:51:25.854517    8328 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 02:51:25.857954    8328 out.go:177] * Done! kubectl is now configured to use "addons-561541" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.126525265Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-vspxr Namespace:ingress-nginx ID:5269f988411362bccce66f04b934df84a0d26bdc4ce45c6c8b007261da475bf5 UID:fb0209b4-34b0-43b4-9c5a-b422998396ae NetNS:/var/run/netns/d1223019-309e-4e3d-ad32-28ca36374a15 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.126660433Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-vspxr from CNI network \"kindnet\" (type=ptp)"
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.150820425Z" level=info msg="Stopped pod sandbox: 5269f988411362bccce66f04b934df84a0d26bdc4ce45c6c8b007261da475bf5" id=f2be93a7-b92b-4ab4-a18c-18b0bea7db95 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.290428591Z" level=info msg="Removing container: c628af59171a66a46b3c272b313386644d948938c6db8db16f27f200e721585f" id=dff9d2c3-cb0c-4ec3-b730-5ed7e6bf6970 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.307262201Z" level=info msg="Removed container c628af59171a66a46b3c272b313386644d948938c6db8db16f27f200e721585f: ingress-nginx/ingress-nginx-controller-bc57996ff-vspxr/controller" id=dff9d2c3-cb0c-4ec3-b730-5ed7e6bf6970 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.379984348Z" level=info msg="Removing container: 46b497acff809389484209b96ca35bced939c78c5affb9d74668aad6d7e83cd5" id=6c3b3cff-e2a5-4dc5-93e4-a9fd09674911 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.396916557Z" level=info msg="Removed container 46b497acff809389484209b96ca35bced939c78c5affb9d74668aad6d7e83cd5: ingress-nginx/ingress-nginx-admission-patch-djgkm/patch" id=6c3b3cff-e2a5-4dc5-93e4-a9fd09674911 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.398199981Z" level=info msg="Removing container: e514372bb7c619701927f3c2919324823e85459257d1df59217e73e57dc24d56" id=e28d7b34-42de-4786-9e81-7274cf300c58 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.415748529Z" level=info msg="Removed container e514372bb7c619701927f3c2919324823e85459257d1df59217e73e57dc24d56: ingress-nginx/ingress-nginx-admission-create-v956v/create" id=e28d7b34-42de-4786-9e81-7274cf300c58 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.417081576Z" level=info msg="Stopping pod sandbox: 5269f988411362bccce66f04b934df84a0d26bdc4ce45c6c8b007261da475bf5" id=97024c8d-1588-4ce7-a396-a9cea8be90cb name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.417120271Z" level=info msg="Stopped pod sandbox (already stopped): 5269f988411362bccce66f04b934df84a0d26bdc4ce45c6c8b007261da475bf5" id=97024c8d-1588-4ce7-a396-a9cea8be90cb name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.417444710Z" level=info msg="Removing pod sandbox: 5269f988411362bccce66f04b934df84a0d26bdc4ce45c6c8b007261da475bf5" id=f8092fff-5eb3-42b7-bf94-50726d51a1b1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.427430672Z" level=info msg="Removed pod sandbox: 5269f988411362bccce66f04b934df84a0d26bdc4ce45c6c8b007261da475bf5" id=f8092fff-5eb3-42b7-bf94-50726d51a1b1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.427911522Z" level=info msg="Stopping pod sandbox: 9b56dfde07b0527515f69ce77101e1bb24f387e1f8d5e86dc3bd108ee41d22bf" id=c4ab9c00-f37f-4500-9780-dab7ab65a912 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.428015060Z" level=info msg="Stopped pod sandbox (already stopped): 9b56dfde07b0527515f69ce77101e1bb24f387e1f8d5e86dc3bd108ee41d22bf" id=c4ab9c00-f37f-4500-9780-dab7ab65a912 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.428342388Z" level=info msg="Removing pod sandbox: 9b56dfde07b0527515f69ce77101e1bb24f387e1f8d5e86dc3bd108ee41d22bf" id=204e8152-fa3f-480f-a608-a32fc77de7b6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.438744127Z" level=info msg="Removed pod sandbox: 9b56dfde07b0527515f69ce77101e1bb24f387e1f8d5e86dc3bd108ee41d22bf" id=204e8152-fa3f-480f-a608-a32fc77de7b6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.439283043Z" level=info msg="Stopping pod sandbox: 701673e6b651e73dde4ca43f5d32c899638d3163b3ae3318ec4d64cf3e36bc5b" id=f759c5aa-1b6c-472b-801b-b54bfadf76ef name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.439323067Z" level=info msg="Stopped pod sandbox (already stopped): 701673e6b651e73dde4ca43f5d32c899638d3163b3ae3318ec4d64cf3e36bc5b" id=f759c5aa-1b6c-472b-801b-b54bfadf76ef name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.439606047Z" level=info msg="Removing pod sandbox: 701673e6b651e73dde4ca43f5d32c899638d3163b3ae3318ec4d64cf3e36bc5b" id=f3f42a9e-34d1-48a8-bdad-c0105014bb3a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.448665647Z" level=info msg="Removed pod sandbox: 701673e6b651e73dde4ca43f5d32c899638d3163b3ae3318ec4d64cf3e36bc5b" id=f3f42a9e-34d1-48a8-bdad-c0105014bb3a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.449250905Z" level=info msg="Stopping pod sandbox: 0f843618adee26e89bd53bb0a6f07b745fc2a95120bc79052f4787bb16925ddb" id=7c82c2ee-70c3-46ed-ba91-fe5a1b7320ca name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.449288328Z" level=info msg="Stopped pod sandbox (already stopped): 0f843618adee26e89bd53bb0a6f07b745fc2a95120bc79052f4787bb16925ddb" id=7c82c2ee-70c3-46ed-ba91-fe5a1b7320ca name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.449680819Z" level=info msg="Removing pod sandbox: 0f843618adee26e89bd53bb0a6f07b745fc2a95120bc79052f4787bb16925ddb" id=ccea9e4e-6757-44c1-8127-5c0286027037 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 04 03:03:49 addons-561541 crio[964]: time="2024-10-04 03:03:49.459541934Z" level=info msg="Removed pod sandbox: 0f843618adee26e89bd53bb0a6f07b745fc2a95120bc79052f4787bb16925ddb" id=ccea9e4e-6757-44c1-8127-5c0286027037 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	84287d5c8ebe4       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   70bcb1cfc17cb       hello-world-app-55bf9c44b4-n76qr
	62a8b0a39d4c5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     4 minutes ago       Running             busybox                   0                   03e8b76cf2ca6       busybox
	d1ae91ba20f39       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   7c2a31b9c5f25       nginx
	57663e10f8fb7       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   17 minutes ago      Running             metrics-server            0                   6e671a3f83f2a       metrics-server-84c5f94fbc-4hhst
	0d15cf46332c6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        17 minutes ago      Running             storage-provisioner       0                   f82a1d1b6fc23       storage-provisioner
	18fa390b6a898       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        17 minutes ago      Running             coredns                   0                   b1697c3bbe9ac       coredns-7c65d6cfc9-l72ll
	c090785615f89       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        18 minutes ago      Running             kube-proxy                0                   d61de0f8c591f       kube-proxy-hrkf9
	11c9fccd22a80       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                        18 minutes ago      Running             kindnet-cni               0                   2bce0fd8db1f3       kindnet-7tqxs
	170502ec13419       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        18 minutes ago      Running             kube-scheduler            0                   9b67e90846ea3       kube-scheduler-addons-561541
	94872964dd248       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        18 minutes ago      Running             kube-apiserver            0                   5cc52b5d8789f       kube-apiserver-addons-561541
	6ae364e85e983       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        18 minutes ago      Running             kube-controller-manager   0                   5351cdc98634d       kube-controller-manager-addons-561541
	ce90142154888       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        18 minutes ago      Running             etcd                      0                   6c867bd953225       etcd-addons-561541
	
	
	==> coredns [18fa390b6a898b58a60f3ccaa506a4216fda29b66b902b3c056007bfa5ded575] <==
	[INFO] 10.244.0.20:38500 - 8222 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061775s
	[INFO] 10.244.0.20:60747 - 49862 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004016585s
	[INFO] 10.244.0.20:38500 - 56231 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001494348s
	[INFO] 10.244.0.20:38500 - 49385 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.017153905s
	[INFO] 10.244.0.20:60747 - 64809 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.018423151s
	[INFO] 10.244.0.20:38500 - 18691 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000113162s
	[INFO] 10.244.0.20:60747 - 19534 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072351s
	[INFO] 10.244.0.20:49488 - 57718 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000125191s
	[INFO] 10.244.0.20:33039 - 35362 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059141s
	[INFO] 10.244.0.20:49488 - 48456 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000072958s
	[INFO] 10.244.0.20:33039 - 19515 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000049492s
	[INFO] 10.244.0.20:49488 - 58741 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045997s
	[INFO] 10.244.0.20:33039 - 26359 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041255s
	[INFO] 10.244.0.20:49488 - 38961 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048229s
	[INFO] 10.244.0.20:33039 - 8551 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039802s
	[INFO] 10.244.0.20:49488 - 59882 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037784s
	[INFO] 10.244.0.20:49488 - 1853 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040016s
	[INFO] 10.244.0.20:33039 - 29061 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039991s
	[INFO] 10.244.0.20:33039 - 18929 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042099s
	[INFO] 10.244.0.20:49488 - 15020 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00189086s
	[INFO] 10.244.0.20:33039 - 40128 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001788963s
	[INFO] 10.244.0.20:33039 - 64799 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001904447s
	[INFO] 10.244.0.20:49488 - 51750 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001365153s
	[INFO] 10.244.0.20:49488 - 21489 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070882s
	[INFO] 10.244.0.20:33039 - 50485 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000068495s
	
	
	==> describe nodes <==
	Name:               addons-561541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-561541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=addons-561541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T02_48_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-561541
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 02:48:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-561541
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:07:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:03:57 +0000   Fri, 04 Oct 2024 02:48:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:03:57 +0000   Fri, 04 Oct 2024 02:48:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:03:57 +0000   Fri, 04 Oct 2024 02:48:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:03:57 +0000   Fri, 04 Oct 2024 02:49:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-561541
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 59dba9086f574e6484a9ea8720d3047f
	  System UUID:                1842af5d-2609-4c57-90a7-654220d497e5
	  Boot ID:                    cc975b9c-d4f7-443e-a63b-68cdfd7ad286
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-world-app-55bf9c44b4-n76qr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 coredns-7c65d6cfc9-l72ll                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     18m
	  kube-system                 etcd-addons-561541                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         18m
	  kube-system                 kindnet-7tqxs                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-addons-561541             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-561541    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-hrkf9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-561541             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-84c5f94fbc-4hhst          100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         18m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node addons-561541 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node addons-561541 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node addons-561541 status is now: NodeHasSufficientPID
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  18m                kubelet          Node addons-561541 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m                kubelet          Node addons-561541 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m                kubelet          Node addons-561541 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node addons-561541 event: Registered Node addons-561541 in Controller
	  Normal   NodeReady                17m                kubelet          Node addons-561541 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 4 02:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015570] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.529270] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.049348] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015318] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.608453] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.834894] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [ce90142154888722e725939e0325f3895c7c4ab3b884c9fac16f97eb098d36ab] <==
	{"level":"warn","ts":"2024-10-04T02:48:57.035221Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.494624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:48:57.035277Z","caller":"traceutil/trace.go:171","msg":"trace[1322537289] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:363; }","duration":"280.559526ms","start":"2024-10-04T02:48:56.754704Z","end":"2024-10-04T02:48:57.035264Z","steps":["trace[1322537289] 'agreement among raft nodes before linearized reading'  (duration: 280.47014ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:48:57.077564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"415.267408ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:48:57.081904Z","caller":"traceutil/trace.go:171","msg":"trace[1993550185] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:363; }","duration":"419.629538ms","start":"2024-10-04T02:48:56.662258Z","end":"2024-10-04T02:48:57.081888Z","steps":["trace[1993550185] 'agreement among raft nodes before linearized reading'  (duration: 415.227762ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T02:48:57.015417Z","caller":"traceutil/trace.go:171","msg":"trace[599140919] linearizableReadLoop","detail":"{readStateIndex:374; appliedIndex:371; }","duration":"229.568105ms","start":"2024-10-04T02:48:56.785823Z","end":"2024-10-04T02:48:57.015391Z","steps":["trace[599140919] 'read index received'  (duration: 112.512183ms)","trace[599140919] 'applied index is now lower than readState.Index'  (duration: 117.055217ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T02:48:57.082261Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"528.280312ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-561541\" ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2024-10-04T02:48:57.082298Z","caller":"traceutil/trace.go:171","msg":"trace[1246009146] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-561541; range_end:; response_count:1; response_revision:363; }","duration":"528.326318ms","start":"2024-10-04T02:48:56.553962Z","end":"2024-10-04T02:48:57.082289Z","steps":["trace[1246009146] 'agreement among raft nodes before linearized reading'  (duration: 505.355102ms)","trace[1246009146] 'range keys from in-memory index tree'  (duration: 22.883725ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T02:48:57.082323Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:48:56.553943Z","time spent":"528.373719ms","remote":"127.0.0.1:48012","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":7277,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-561541\" "}
	{"level":"warn","ts":"2024-10-04T02:48:57.082617Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:48:56.662237Z","time spent":"419.717225ms","remote":"127.0.0.1:47856","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":0,"response size":29,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 "}
	{"level":"warn","ts":"2024-10-04T02:48:57.059431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"397.095512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:48:57.170877Z","caller":"traceutil/trace.go:171","msg":"trace[671757903] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:363; }","duration":"508.543403ms","start":"2024-10-04T02:48:56.662309Z","end":"2024-10-04T02:48:57.170852Z","steps":["trace[671757903] 'agreement among raft nodes before linearized reading'  (duration: 397.07632ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:48:57.171584Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:48:56.662296Z","time spent":"509.260671ms","remote":"127.0.0.1:48210","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":0,"response size":29,"request content":"key:\"/registry/storageclasses/standard\" "}
	{"level":"warn","ts":"2024-10-04T02:48:57.285863Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.369755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:48:57.287128Z","caller":"traceutil/trace.go:171","msg":"trace[52719835] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:363; }","duration":"179.623388ms","start":"2024-10-04T02:48:57.107488Z","end":"2024-10-04T02:48:57.287111Z","steps":["trace[52719835] 'agreement among raft nodes before linearized reading'  (duration: 50.770164ms)","trace[52719835] 'range keys from in-memory index tree'  (duration: 127.456192ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T02:48:57.285828Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"731.634653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:48:57.288049Z","caller":"traceutil/trace.go:171","msg":"trace[1368450268] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:363; }","duration":"733.864332ms","start":"2024-10-04T02:48:56.554173Z","end":"2024-10-04T02:48:57.288037Z","steps":["trace[1368450268] 'agreement among raft nodes before linearized reading'  (duration: 647.255534ms)","trace[1368450268] 'range keys from in-memory index tree'  (duration: 84.3691ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T02:48:57.288663Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:48:56.554132Z","time spent":"734.516298ms","remote":"127.0.0.1:48030","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts\" limit:1 "}
	{"level":"warn","ts":"2024-10-04T02:48:57.287286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.453443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:48:57.289173Z","caller":"traceutil/trace.go:171","msg":"trace[628025940] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:363; }","duration":"103.346355ms","start":"2024-10-04T02:48:57.185817Z","end":"2024-10-04T02:48:57.289163Z","steps":["trace[628025940] 'range keys from in-memory index tree'  (duration: 101.286553ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T02:58:44.648713Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1550}
	{"level":"info","ts":"2024-10-04T02:58:44.681113Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1550,"took":"31.806076ms","hash":4174400111,"current-db-size-bytes":6389760,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3305472,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-10-04T02:58:44.681164Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4174400111,"revision":1550,"compact-revision":-1}
	{"level":"info","ts":"2024-10-04T03:03:44.659510Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1963}
	{"level":"info","ts":"2024-10-04T03:03:44.683172Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1963,"took":"23.153533ms","hash":2693781789,"current-db-size-bytes":6389760,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":5083136,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-10-04T03:03:44.683219Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2693781789,"revision":1963,"compact-revision":1550}
	
	
	==> kernel <==
	 03:07:04 up 49 min,  0 users,  load average: 0.05, 0.24, 0.30
	Linux addons-561541 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [11c9fccd22a80d9caa15155d0648ed64394dad1ef8a7f14a96f75404be5d649e] <==
	I1004 03:04:59.609456       1 main.go:299] handling current node
	I1004 03:05:09.609668       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:05:09.609700       1 main.go:299] handling current node
	I1004 03:05:19.609755       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:05:19.609789       1 main.go:299] handling current node
	I1004 03:05:29.611314       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:05:29.611353       1 main.go:299] handling current node
	I1004 03:05:39.610875       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:05:39.610907       1 main.go:299] handling current node
	I1004 03:05:49.612738       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:05:49.612772       1 main.go:299] handling current node
	I1004 03:05:59.609106       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:05:59.609138       1 main.go:299] handling current node
	I1004 03:06:09.609660       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:06:09.609696       1 main.go:299] handling current node
	I1004 03:06:19.609803       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:06:19.609834       1 main.go:299] handling current node
	I1004 03:06:29.609546       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:06:29.609664       1 main.go:299] handling current node
	I1004 03:06:39.612535       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:06:39.612569       1 main.go:299] handling current node
	I1004 03:06:49.615041       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:06:49.615158       1 main.go:299] handling current node
	I1004 03:06:59.609776       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:06:59.609893       1 main.go:299] handling current node
	
	
	==> kube-apiserver [94872964dd2482cf69075d5da2ba039a75dafcdea7cc6d04c7cee37af31d0bd4] <==
	E1004 02:50:50.492702       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.7.145:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.7.145:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.7.145:443: connect: connection refused" logger="UnhandledError"
	E1004 02:50:50.499329       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.7.145:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.7.145:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.7.145:443: connect: connection refused" logger="UnhandledError"
	I1004 02:50:50.774461       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 03:00:06.852413       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1004 03:00:23.529727       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:00:23.529800       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:00:23.630409       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:00:23.630472       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:00:23.687414       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:00:23.687464       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:00:23.726639       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:00:23.726685       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:00:23.805982       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:00:23.807304       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1004 03:00:24.729523       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1004 03:00:24.806669       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1004 03:00:24.825919       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1004 03:00:38.457257       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.231.221"}
	E1004 03:00:41.376517       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1004 03:01:15.431320       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1004 03:01:16.468283       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1004 03:01:20.997346       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1004 03:01:21.325261       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.23.145"}
	I1004 03:03:41.328362       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.55.205"}
	E1004 03:03:46.007420       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [6ae364e85e983c9890233d8f0bc90be79ea7b308cd10a4d9e693395edc5cbb10] <==
	E1004 03:04:48.686546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:04:55.110929       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:04:55.110971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:05:00.775230       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:05:00.775275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:05:23.840601       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:05:23.840647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:05:25.871727       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:05:25.871777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:05:36.740596       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:05:36.740643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:05:36.747230       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:05:36.747268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:05:55.787989       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:05:55.788032       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:06:08.300403       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:06:08.300448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:06:27.873670       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:06:27.873797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:06:28.011970       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:06:28.012014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:06:35.536995       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:06:35.537113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:06:50.023946       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:06:50.024012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [c090785615f896cc273e87900c984e08c06c2ee480560c24d86715508d23903f] <==
	I1004 02:48:59.709398       1 server_linux.go:66] "Using iptables proxy"
	I1004 02:49:00.163183       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1004 02:49:00.165551       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 02:49:00.230608       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1004 02:49:00.230743       1 server_linux.go:169] "Using iptables Proxier"
	I1004 02:49:00.233031       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 02:49:00.233700       1 server.go:483] "Version info" version="v1.31.1"
	I1004 02:49:00.233785       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 02:49:00.235594       1 config.go:199] "Starting service config controller"
	I1004 02:49:00.235705       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 02:49:00.235778       1 config.go:105] "Starting endpoint slice config controller"
	I1004 02:49:00.235813       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 02:49:00.236798       1 config.go:328] "Starting node config controller"
	I1004 02:49:00.236911       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 02:49:00.341187       1 shared_informer.go:320] Caches are synced for node config
	I1004 02:49:00.341275       1 shared_informer.go:320] Caches are synced for service config
	I1004 02:49:00.341313       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [170502ec13419dd7bc954f17360eb6f9574c4363488375290f7a4aae46bb6aab] <==
	W1004 02:48:47.685626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 02:48:47.685638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.685703       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 02:48:47.685714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.685756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 02:48:47.685766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686056       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 02:48:47.686090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 02:48:47.686347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 02:48:47.686484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 02:48:47.686558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 02:48:47.686655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686729       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 02:48:47.686746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 02:48:47.686875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.686893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 02:48:47.686906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.687812       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 02:48:47.687851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1004 02:48:49.277430       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:05:09 addons-561541 kubelet[1504]: E1004 03:05:09.331060    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011109330822433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:09 addons-561541 kubelet[1504]: E1004 03:05:09.331094    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011109330822433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:19 addons-561541 kubelet[1504]: E1004 03:05:19.333591    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011119333355704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:19 addons-561541 kubelet[1504]: E1004 03:05:19.333634    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011119333355704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:29 addons-561541 kubelet[1504]: E1004 03:05:29.336414    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011129336186231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:29 addons-561541 kubelet[1504]: E1004 03:05:29.336460    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011129336186231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:39 addons-561541 kubelet[1504]: E1004 03:05:39.339070    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011139338842575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:39 addons-561541 kubelet[1504]: E1004 03:05:39.339111    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011139338842575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:49 addons-561541 kubelet[1504]: E1004 03:05:49.342260    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011149342033988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:49 addons-561541 kubelet[1504]: E1004 03:05:49.342300    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011149342033988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:59 addons-561541 kubelet[1504]: E1004 03:05:59.344509    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011159344285861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:59 addons-561541 kubelet[1504]: E1004 03:05:59.344546    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011159344285861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:06:09 addons-561541 kubelet[1504]: E1004 03:06:09.347074    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011169346835340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:06:09 addons-561541 kubelet[1504]: E1004 03:06:09.347112    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011169346835340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:06:19 addons-561541 kubelet[1504]: E1004 03:06:19.349323    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011179349060605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:06:19 addons-561541 kubelet[1504]: E1004 03:06:19.349362    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011179349060605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:06:28 addons-561541 kubelet[1504]: I1004 03:06:28.042716    1504 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 03:06:29 addons-561541 kubelet[1504]: E1004 03:06:29.352250    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011189351942570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:06:29 addons-561541 kubelet[1504]: E1004 03:06:29.352287    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011189351942570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:06:39 addons-561541 kubelet[1504]: E1004 03:06:39.355211    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011199354954169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:06:39 addons-561541 kubelet[1504]: E1004 03:06:39.355261    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011199354954169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:06:49 addons-561541 kubelet[1504]: E1004 03:06:49.358404    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011209358170212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:06:49 addons-561541 kubelet[1504]: E1004 03:06:49.358445    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011209358170212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:06:59 addons-561541 kubelet[1504]: E1004 03:06:59.361067    1504 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011219360851618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:06:59 addons-561541 kubelet[1504]: E1004 03:06:59.361102    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011219360851618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:608023,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0d15cf46332c64d3e7a662fc0b4577dc8d495d7d97618c2a5c069605014065da] <==
	I1004 02:49:41.198898       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 02:49:41.221190       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 02:49:41.221345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 02:49:41.242699       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 02:49:41.245549       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41bdd897-b9f7-4fc8-98f5-b9ea8304c00f", APIVersion:"v1", ResourceVersion:"936", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-561541_be83665d-4dd2-47e0-9163-d59677258681 became leader
	I1004 02:49:41.245836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-561541_be83665d-4dd2-47e0-9163-d59677258681!
	I1004 02:49:41.346253       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-561541_be83665d-4dd2-47e0-9163-d59677258681!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-561541 -n addons-561541
helpers_test.go:261: (dbg) Run:  kubectl --context addons-561541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (364.16s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (126.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-481241 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1004 03:21:26.675167    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-481241 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m1.765911834s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:591: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-481241       NotReady   control-plane   11m     v1.31.1
	ha-481241-m02   Ready      control-plane   10m     v1.31.1
	ha-481241-m04   Ready      <none>          8m24s   v1.31.1

                                                
                                                
-- /stdout --
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
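For reference, the go-template in the command above walks each node's status.conditions and prints the status of the Ready condition; the assertion fails here because ha-481241's Ready condition is Unknown rather than True. A minimal standalone sketch of the same check in Go (hypothetical, not part of the test suite; assumes client-go is available and the default kubeconfig's current context points at the cluster under test):

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the default kubeconfig; the current context is assumed to point
	// at the minikube profile under test (e.g. ha-481241).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	ready := 0
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// c.Status is True, False, or Unknown, matching the go-template output above.
				fmt.Printf("%s\t%s\n", n.Name, c.Status)
				if c.Status == corev1.ConditionTrue {
					ready++
				}
			}
		}
	}
	fmt.Printf("%d/%d nodes Ready\n", ready, len(nodes.Items))
}

The test expects every node in the cluster to report Ready=True after the restart; the post-mortem logs that follow capture the cluster state at the time of the failure.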
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-481241
helpers_test.go:235: (dbg) docker inspect ha-481241:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "462f298eff4281286995e3854193dfffc664e65c1babc48f0a6c0308f78e6495",
	        "Created": "2024-10-04T03:11:38.587823553Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 73966,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-04T03:21:09.805530037Z",
	            "FinishedAt": "2024-10-04T03:21:09.037229638Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/462f298eff4281286995e3854193dfffc664e65c1babc48f0a6c0308f78e6495/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/462f298eff4281286995e3854193dfffc664e65c1babc48f0a6c0308f78e6495/hostname",
	        "HostsPath": "/var/lib/docker/containers/462f298eff4281286995e3854193dfffc664e65c1babc48f0a6c0308f78e6495/hosts",
	        "LogPath": "/var/lib/docker/containers/462f298eff4281286995e3854193dfffc664e65c1babc48f0a6c0308f78e6495/462f298eff4281286995e3854193dfffc664e65c1babc48f0a6c0308f78e6495-json.log",
	        "Name": "/ha-481241",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481241:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-481241",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f8671a1521fd991f89c17a51a4619671553d79803f89d585c0f9ad82ef4df865-init/diff:/var/lib/docker/overlay2/113409e5ac8a20e78db05ebf8d2720874d391240a7f47648e5e10a2a0c89288f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8671a1521fd991f89c17a51a4619671553d79803f89d585c0f9ad82ef4df865/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8671a1521fd991f89c17a51a4619671553d79803f89d585c0f9ad82ef4df865/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8671a1521fd991f89c17a51a4619671553d79803f89d585c0f9ad82ef4df865/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481241",
	                "Source": "/var/lib/docker/volumes/ha-481241/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481241",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481241",
	                "name.minikube.sigs.k8s.io": "ha-481241",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7946fff1a00d414b203cb1bddc4b81eb8f2704292b6d55bd61fbfe7a3ae02c5",
	            "SandboxKey": "/var/run/docker/netns/b7946fff1a00",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481241": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c36a525dd0a72eaa2d335db058f031dc90bd1bb21f506692d0bc38c179c1e55f",
	                    "EndpointID": "50eac6c2dbd6178a3ba55fcddadef1c45720a8157ed19cfd6e128430b638ef6e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481241",
	                        "462f298eff42"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-481241 -n ha-481241
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-481241 logs -n 25: (2.044794605s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-481241 cp ha-481241-m03:/home/docker/cp-test.txt                              | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | ha-481241-m04:/home/docker/cp-test_ha-481241-m03_ha-481241-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-481241 ssh -n                                                                 | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | ha-481241-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-481241 ssh -n ha-481241-m04 sudo cat                                          | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | /home/docker/cp-test_ha-481241-m03_ha-481241-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-481241 cp testdata/cp-test.txt                                                | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | ha-481241-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-481241 ssh -n                                                                 | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | ha-481241-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-481241 cp ha-481241-m04:/home/docker/cp-test.txt                              | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1073171230/001/cp-test_ha-481241-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-481241 ssh -n                                                                 | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | ha-481241-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-481241 cp ha-481241-m04:/home/docker/cp-test.txt                              | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | ha-481241:/home/docker/cp-test_ha-481241-m04_ha-481241.txt                       |           |         |         |                     |                     |
	| ssh     | ha-481241 ssh -n                                                                 | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | ha-481241-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-481241 ssh -n ha-481241 sudo cat                                              | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | /home/docker/cp-test_ha-481241-m04_ha-481241.txt                                 |           |         |         |                     |                     |
	| cp      | ha-481241 cp ha-481241-m04:/home/docker/cp-test.txt                              | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | ha-481241-m02:/home/docker/cp-test_ha-481241-m04_ha-481241-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-481241 ssh -n                                                                 | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | ha-481241-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-481241 ssh -n ha-481241-m02 sudo cat                                          | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | /home/docker/cp-test_ha-481241-m04_ha-481241-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-481241 cp ha-481241-m04:/home/docker/cp-test.txt                              | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | ha-481241-m03:/home/docker/cp-test_ha-481241-m04_ha-481241-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-481241 ssh -n                                                                 | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | ha-481241-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-481241 ssh -n ha-481241-m03 sudo cat                                          | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:15 UTC |
	|         | /home/docker/cp-test_ha-481241-m04_ha-481241-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-481241 node stop m02 -v=7                                                     | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:15 UTC | 04 Oct 24 03:16 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-481241 node start m02 -v=7                                                    | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:16 UTC | 04 Oct 24 03:16 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-481241 -v=7                                                           | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-481241 -v=7                                                                | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:16 UTC | 04 Oct 24 03:17 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-481241 --wait=true -v=7                                                    | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:17 UTC | 04 Oct 24 03:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-481241                                                                | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:20 UTC |                     |
	| node    | ha-481241 node delete m03 -v=7                                                   | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:20 UTC | 04 Oct 24 03:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-481241 stop -v=7                                                              | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:20 UTC | 04 Oct 24 03:21 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-481241 --wait=true                                                         | ha-481241 | jenkins | v1.34.0 | 04 Oct 24 03:21 UTC | 04 Oct 24 03:23 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 03:21:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 03:21:09.456317   73775 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:21:09.456521   73775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:21:09.456552   73775 out.go:358] Setting ErrFile to fd 2...
	I1004 03:21:09.456573   73775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:21:09.456831   73775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 03:21:09.457254   73775 out.go:352] Setting JSON to false
	I1004 03:21:09.458112   73775 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3815,"bootTime":1728008255,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1004 03:21:09.458212   73775 start.go:139] virtualization:  
	I1004 03:21:09.461494   73775 out.go:177] * [ha-481241] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 03:21:09.464812   73775 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:21:09.464866   73775 notify.go:220] Checking for updates...
	I1004 03:21:09.470146   73775 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:21:09.472749   73775 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:21:09.475287   73775 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	I1004 03:21:09.477904   73775 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 03:21:09.480469   73775 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:21:09.483515   73775 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:21:09.484062   73775 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:21:09.510218   73775 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 03:21:09.510358   73775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:21:09.568149   73775 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:41 SystemTime:2024-10-04 03:21:09.558590009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:21:09.568263   73775 docker.go:318] overlay module found
	I1004 03:21:09.571119   73775 out.go:177] * Using the docker driver based on existing profile
	I1004 03:21:09.573817   73775 start.go:297] selected driver: docker
	I1004 03:21:09.573839   73775 start.go:901] validating driver "docker" against &{Name:ha-481241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-481241 Namespace:default APIServerHAVIP:192.168.49.254 APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logvi
ewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:21:09.573995   73775 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:21:09.574098   73775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:21:09.630967   73775 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:41 SystemTime:2024-10-04 03:21:09.621692379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:21:09.631466   73775 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:21:09.631496   73775 cni.go:84] Creating CNI manager for ""
	I1004 03:21:09.631542   73775 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1004 03:21:09.631599   73775 start.go:340] cluster config:
	{Name:ha-481241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-481241 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvi
dia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:21:09.634709   73775 out.go:177] * Starting "ha-481241" primary control-plane node in "ha-481241" cluster
	I1004 03:21:09.637251   73775 cache.go:121] Beginning downloading kic base image for docker with crio
	I1004 03:21:09.639811   73775 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1004 03:21:09.642286   73775 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:21:09.642339   73775 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1004 03:21:09.642351   73775 cache.go:56] Caching tarball of preloaded images
	I1004 03:21:09.642380   73775 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 03:21:09.642435   73775 preload.go:172] Found /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1004 03:21:09.642446   73775 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:21:09.642598   73775 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/config.json ...
	I1004 03:21:09.661192   73775 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1004 03:21:09.661229   73775 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1004 03:21:09.661246   73775 cache.go:194] Successfully downloaded all kic artifacts
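Editor's note: the lines above (image.go:79/98) check whether the kicbase image is already present in the local docker daemon before deciding to pull. A minimal Go sketch of that kind of existence check, not minikube's actual code, using the real `docker image inspect` subcommand; the helper name and the shortened image reference (digest elided) are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // imageInDaemon reports whether the given image reference is already present
    // in the local docker daemon: `docker image inspect` exits non-zero when the
    // image is missing, so a nil error means "found, skip the pull".
    func imageInDaemon(ref string) bool {
    	return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
    	// Tag taken from the log; the @sha256 digest is omitted here for brevity.
    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master"
    	if imageInDaemon(ref) {
    		fmt.Println("found in local docker daemon, skipping pull")
    	} else {
    		fmt.Println("not cached locally, would pull")
    	}
    }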
	I1004 03:21:09.661278   73775 start.go:360] acquireMachinesLock for ha-481241: {Name:mke6d57b3bfe24ccb6e1636ec98a3cc79c723f18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:21:09.661353   73775 start.go:364] duration metric: took 38.44µs to acquireMachinesLock for "ha-481241"
	I1004 03:21:09.661379   73775 start.go:96] Skipping create...Using existing machine configuration
	I1004 03:21:09.661385   73775 fix.go:54] fixHost starting: 
	I1004 03:21:09.661647   73775 cli_runner.go:164] Run: docker container inspect ha-481241 --format={{.State.Status}}
	I1004 03:21:09.677927   73775 fix.go:112] recreateIfNeeded on ha-481241: state=Stopped err=<nil>
	W1004 03:21:09.677975   73775 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 03:21:09.681001   73775 out.go:177] * Restarting existing docker container for "ha-481241" ...
	I1004 03:21:09.683753   73775 cli_runner.go:164] Run: docker start ha-481241
	I1004 03:21:09.966811   73775 cli_runner.go:164] Run: docker container inspect ha-481241 --format={{.State.Status}}
	I1004 03:21:09.988046   73775 kic.go:430] container "ha-481241" state is running.
	I1004 03:21:09.989614   73775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481241
	I1004 03:21:10.013655   73775 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/config.json ...
	I1004 03:21:10.013901   73775 machine.go:93] provisionDockerMachine start ...
	I1004 03:21:10.013962   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241
	I1004 03:21:10.032798   73775 main.go:141] libmachine: Using SSH client type: native
	I1004 03:21:10.033126   73775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1004 03:21:10.033144   73775 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 03:21:10.034001   73775 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1004 03:21:13.172599   73775 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481241
	
	I1004 03:21:13.172626   73775 ubuntu.go:169] provisioning hostname "ha-481241"
	I1004 03:21:13.172713   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241
	I1004 03:21:13.190112   73775 main.go:141] libmachine: Using SSH client type: native
	I1004 03:21:13.190367   73775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1004 03:21:13.190385   73775 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481241 && echo "ha-481241" | sudo tee /etc/hostname
	I1004 03:21:13.340841   73775 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481241
	
	I1004 03:21:13.340937   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241
	I1004 03:21:13.359188   73775 main.go:141] libmachine: Using SSH client type: native
	I1004 03:21:13.359441   73775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1004 03:21:13.359457   73775 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481241' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481241/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481241' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:21:13.497226   73775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
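Editor's note: the provisioning above runs its commands over SSH against the container's published port (127.0.0.1:32828 in this run), and simply retries after the initial "ssh: handshake failed: EOF" while sshd inside the restarted container comes up. A rough Go sketch of that pattern, assuming golang.org/x/crypto/ssh; the retry policy and helper name are mine, the address, user and key path are the ones reported in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the docker-mapped SSH port and runs one command,
    // retrying the dial a few times because the container's sshd may not be
    // ready yet (the "handshake failed: EOF" seen in the log above).
    func runOverSSH(addr, user, keyPath, command string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local-only container
    		Timeout:         10 * time.Second,
    	}
    	var client *ssh.Client
    	for attempt := 0; attempt < 5; attempt++ {
    		client, err = ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			break
    		}
    		time.Sleep(2 * time.Second) // container still booting; retry
    	}
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(command)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("127.0.0.1:32828", "docker",
    		"/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241/id_rsa",
    		"hostname")
    	fmt.Println(out, err)
    }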
	I1004 03:21:13.497254   73775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19546-2238/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-2238/.minikube}
	I1004 03:21:13.497280   73775 ubuntu.go:177] setting up certificates
	I1004 03:21:13.497297   73775 provision.go:84] configureAuth start
	I1004 03:21:13.497363   73775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481241
	I1004 03:21:13.514044   73775 provision.go:143] copyHostCerts
	I1004 03:21:13.514086   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem
	I1004 03:21:13.514119   73775 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem, removing ...
	I1004 03:21:13.514130   73775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem
	I1004 03:21:13.514208   73775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem (1082 bytes)
	I1004 03:21:13.514328   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem
	I1004 03:21:13.514349   73775 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem, removing ...
	I1004 03:21:13.514354   73775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem
	I1004 03:21:13.514383   73775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem (1123 bytes)
	I1004 03:21:13.514436   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem
	I1004 03:21:13.514456   73775 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem, removing ...
	I1004 03:21:13.514461   73775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem
	I1004 03:21:13.514491   73775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem (1679 bytes)
	I1004 03:21:13.514617   73775 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem org=jenkins.ha-481241 san=[127.0.0.1 192.168.49.2 ha-481241 localhost minikube]
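Editor's note: the line above generates a machine server certificate with SANs [127.0.0.1 192.168.49.2 ha-481241 localhost minikube], signed by the minikube CA. A self-contained crypto/x509 sketch of issuing a certificate with those SANs; it creates a throwaway CA in place of minikube's ca.pem/ca-key.pem, and the lifetimes and key sizes are illustrative, not minikube's:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for minikube's ca.pem / ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert carrying the same SANs the log reports for ha-481241.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-481241"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-481241", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	fmt.Printf("issued server cert, %d bytes DER\n", len(srvDER))
    }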
	I1004 03:21:13.775340   73775 provision.go:177] copyRemoteCerts
	I1004 03:21:13.775435   73775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:21:13.775491   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241
	I1004 03:21:13.791687   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241/id_rsa Username:docker}
	I1004 03:21:13.886840   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:21:13.886909   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:21:13.913040   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:21:13.913101   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1004 03:21:13.939921   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:21:13.939993   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 03:21:13.965116   73775 provision.go:87] duration metric: took 467.796327ms to configureAuth
	I1004 03:21:13.965142   73775 ubuntu.go:193] setting minikube options for container-runtime
	I1004 03:21:13.965389   73775 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:21:13.965510   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241
	I1004 03:21:13.982816   73775 main.go:141] libmachine: Using SSH client type: native
	I1004 03:21:13.983059   73775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1004 03:21:13.983081   73775 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:21:14.466246   73775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:21:14.466309   73775 machine.go:96] duration metric: took 4.452397852s to provisionDockerMachine
	I1004 03:21:14.466334   73775 start.go:293] postStartSetup for "ha-481241" (driver="docker")
	I1004 03:21:14.466360   73775 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:21:14.466450   73775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:21:14.466518   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241
	I1004 03:21:14.487633   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241/id_rsa Username:docker}
	I1004 03:21:14.586180   73775 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:21:14.589433   73775 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1004 03:21:14.589471   73775 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1004 03:21:14.589482   73775 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1004 03:21:14.589490   73775 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1004 03:21:14.589501   73775 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/addons for local assets ...
	I1004 03:21:14.589561   73775 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/files for local assets ...
	I1004 03:21:14.589641   73775 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem -> 75602.pem in /etc/ssl/certs
	I1004 03:21:14.589654   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem -> /etc/ssl/certs/75602.pem
	I1004 03:21:14.589756   73775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:21:14.598252   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem --> /etc/ssl/certs/75602.pem (1708 bytes)
	I1004 03:21:14.621460   73775 start.go:296] duration metric: took 155.097999ms for postStartSetup
	I1004 03:21:14.621537   73775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:21:14.621588   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241
	I1004 03:21:14.637286   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241/id_rsa Username:docker}
	I1004 03:21:14.729891   73775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1004 03:21:14.733965   73775 fix.go:56] duration metric: took 5.072573168s for fixHost
	I1004 03:21:14.733992   73775 start.go:83] releasing machines lock for "ha-481241", held for 5.07262376s
	I1004 03:21:14.734073   73775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481241
	I1004 03:21:14.751624   73775 ssh_runner.go:195] Run: cat /version.json
	I1004 03:21:14.751678   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241
	I1004 03:21:14.751709   73775 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:21:14.751765   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241
	I1004 03:21:14.768653   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241/id_rsa Username:docker}
	I1004 03:21:14.769778   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241/id_rsa Username:docker}
	I1004 03:21:14.861271   73775 ssh_runner.go:195] Run: systemctl --version
	I1004 03:21:14.992618   73775 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:21:15.135480   73775 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 03:21:15.140779   73775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:21:15.150882   73775 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1004 03:21:15.151033   73775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:21:15.162162   73775 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 03:21:15.162189   73775 start.go:495] detecting cgroup driver to use...
	I1004 03:21:15.162224   73775 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1004 03:21:15.162276   73775 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:21:15.175112   73775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:21:15.187802   73775 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:21:15.187926   73775 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:21:15.201227   73775 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:21:15.213296   73775 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:21:15.294336   73775 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:21:15.390140   73775 docker.go:233] disabling docker service ...
	I1004 03:21:15.390212   73775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:21:15.403852   73775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:21:15.416015   73775 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:21:15.508585   73775 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:21:15.594748   73775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:21:15.607015   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:21:15.624577   73775 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:21:15.624657   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:15.635046   73775 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:21:15.635125   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:15.644951   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:15.654944   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:15.665153   73775 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:21:15.674506   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:15.684351   73775 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:15.694341   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:15.704295   73775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:21:15.713029   73775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:21:15.721619   73775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:21:15.808527   73775 ssh_runner.go:195] Run: sudo systemctl restart crio
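Editor's note: the sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) and then daemon-reloads and restarts crio. A minimal Go sketch of the first two substitutions done with regexp instead of sed; the file path and replacement values come straight from the commands in the log, the helper name is mine:

    package main

    import (
    	"os"
    	"regexp"
    )

    // patchCrioConf mirrors the first two sed edits from the log: force the
    // pause image and the cgroup manager in CRI-O's drop-in config, leaving
    // every other line untouched.
    func patchCrioConf(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgmgr := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	out = cgmgr.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
    		panic(err)
    	}
    	// After rewriting the drop-in, the log runs:
    	//   sudo systemctl daemon-reload && sudo systemctl restart crio
    }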
	I1004 03:21:15.936531   73775 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:21:15.936677   73775 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:21:15.940364   73775 start.go:563] Will wait 60s for crictl version
	I1004 03:21:15.940479   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:21:15.943899   73775 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:21:15.980449   73775 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1004 03:21:15.980603   73775 ssh_runner.go:195] Run: crio --version
	I1004 03:21:16.021360   73775 ssh_runner.go:195] Run: crio --version
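Editor's note: after restarting crio, the log waits up to 60s for the socket path /var/run/crio/crio.sock and then for `crictl version`. A small polling loop with the same timeout, purely illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for the CRI socket, as in "Will wait 60s for socket
    // path /var/run/crio/crio.sock" above, returning an error on timeout.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("crio socket is up; safe to run `sudo crictl version`")
    }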
	I1004 03:21:16.062012   73775 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1004 03:21:16.064502   73775 cli_runner.go:164] Run: docker network inspect ha-481241 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 03:21:16.078895   73775 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1004 03:21:16.082662   73775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
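Editor's note: the bash one-liner above drops any existing host.minikube.internal line from /etc/hosts and appends "192.168.49.1<TAB>host.minikube.internal". The same idempotent update expressed in Go, as a sketch only; the function name is mine, the IP and hostname are from the log:

    package main

    import (
    	"os"
    	"strings"
    )

    // pinHostEntry rewrites the hosts file so it contains exactly one line
    // mapping name to ip, mirroring the grep/echo/cp one-liner in the log.
    func pinHostEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale entry for this name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	_ = pinHostEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal")
    }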
	I1004 03:21:16.093763   73775 kubeadm.go:883] updating cluster {Name:ha-481241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-481241 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false me
tallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 03:21:16.093927   73775 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:21:16.093994   73775 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:21:16.146969   73775 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:21:16.146993   73775 crio.go:433] Images already preloaded, skipping extraction
	I1004 03:21:16.147050   73775 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:21:16.183490   73775 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:21:16.183513   73775 cache_images.go:84] Images are preloaded, skipping loading
	I1004 03:21:16.183521   73775 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1004 03:21:16.183626   73775 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481241 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-481241 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:21:16.183711   73775 ssh_runner.go:195] Run: crio config
	I1004 03:21:16.250701   73775 cni.go:84] Creating CNI manager for ""
	I1004 03:21:16.250727   73775 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1004 03:21:16.250738   73775 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 03:21:16.250771   73775 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481241 NodeName:ha-481241 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 03:21:16.250937   73775 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481241"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
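Editor's note: the kubeadm config above is a single YAML stream with four "---"-separated documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), written further down to /var/tmp/minikube/kubeadm.yaml.new. A short sketch of walking such a multi-document stream and printing each document's apiVersion/kind, assuming gopkg.in/yaml.v3; not part of minikube:

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		err := dec.Decode(&doc)
    		if errors.Is(err, io.EOF) {
    			break // no more "---"-separated documents
    		}
    		if err != nil {
    			panic(err)
    		}
    		// e.g. "kubeadm.k8s.io/v1beta3 ClusterConfiguration"
    		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
    	}
    }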
	
	I1004 03:21:16.250958   73775 kube-vip.go:115] generating kube-vip config ...
	I1004 03:21:16.251021   73775 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1004 03:21:16.263086   73775 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1004 03:21:16.263200   73775 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
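Editor's note: kube-vip.go:115/137 above generates that static-pod manifest from the cluster's HA VIP (192.168.49.254), interface, API server port and kube-vip image. A small text/template sketch that renders only the per-cluster values from the manifest; the template string and struct are mine, not minikube's template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Only the values that vary per cluster in the manifest above.
    type kubeVipParams struct {
    	Port      string
    	Interface string
    	Address   string
    	Image     string
    }

    const envTmpl = `    - name: port
          value: "{{ .Port }}"
        - name: vip_interface
          value: {{ .Interface }}
        - name: address
          value: {{ .Address }}
        image: {{ .Image }}
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(envTmpl))
    	// Values taken from the generated manifest in the log.
    	err := t.Execute(os.Stdout, kubeVipParams{
    		Port:      "8443",
    		Interface: "eth0",
    		Address:   "192.168.49.254",
    		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.3",
    	})
    	if err != nil {
    		panic(err)
    	}
    }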
	I1004 03:21:16.263263   73775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:21:16.272046   73775 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:21:16.272120   73775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1004 03:21:16.281109   73775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1004 03:21:16.299739   73775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:21:16.318280   73775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I1004 03:21:16.336646   73775 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1004 03:21:16.354516   73775 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:21:16.357936   73775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:21:16.368541   73775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:21:16.459939   73775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:21:16.474704   73775 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241 for IP: 192.168.49.2
	I1004 03:21:16.474727   73775 certs.go:194] generating shared ca certs ...
	I1004 03:21:16.474746   73775 certs.go:226] acquiring lock for ca certs: {Name:mk468b07ab6620fd74cefc3667e1a8643008ce5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:21:16.474886   73775 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key
	I1004 03:21:16.474932   73775 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key
	I1004 03:21:16.474943   73775 certs.go:256] generating profile certs ...
	I1004 03:21:16.475023   73775 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/client.key
	I1004 03:21:16.475054   73775 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.key.5d211b5a
	I1004 03:21:16.475072   73775 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.crt.5d211b5a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1004 03:21:17.375589   73775 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.crt.5d211b5a ...
	I1004 03:21:17.375623   73775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.crt.5d211b5a: {Name:mk5561ad891f8cf4d0139b0721815633435f8590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:21:17.375829   73775 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.key.5d211b5a ...
	I1004 03:21:17.375848   73775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.key.5d211b5a: {Name:mkd71d1ff4d480a0715337644db335ec9e2ea004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:21:17.375934   73775 certs.go:381] copying /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.crt.5d211b5a -> /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.crt
	I1004 03:21:17.376084   73775 certs.go:385] copying /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.key.5d211b5a -> /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.key
	I1004 03:21:17.376224   73775 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/proxy-client.key
	I1004 03:21:17.376242   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:21:17.376258   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:21:17.376278   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:21:17.376295   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:21:17.376310   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:21:17.376324   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:21:17.376342   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:21:17.376357   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:21:17.376407   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem (1338 bytes)
	W1004 03:21:17.376442   73775 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560_empty.pem, impossibly tiny 0 bytes
	I1004 03:21:17.376456   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:21:17.376482   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:21:17.376508   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:21:17.376531   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem (1679 bytes)
	I1004 03:21:17.376577   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem (1708 bytes)
	I1004 03:21:17.376609   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:21:17.376624   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem -> /usr/share/ca-certificates/7560.pem
	I1004 03:21:17.376635   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem -> /usr/share/ca-certificates/75602.pem
	I1004 03:21:17.377284   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:21:17.401493   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 03:21:17.424983   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:21:17.448385   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 03:21:17.472674   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1004 03:21:17.497101   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 03:21:17.521054   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:21:17.545263   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:21:17.569964   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:21:17.594094   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem --> /usr/share/ca-certificates/7560.pem (1338 bytes)
	I1004 03:21:17.618533   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem --> /usr/share/ca-certificates/75602.pem (1708 bytes)
	I1004 03:21:17.642462   73775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 03:21:17.660065   73775 ssh_runner.go:195] Run: openssl version
	I1004 03:21:17.665699   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:21:17.675067   73775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:21:17.678494   73775 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:21:17.678564   73775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:21:17.685195   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:21:17.694034   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7560.pem && ln -fs /usr/share/ca-certificates/7560.pem /etc/ssl/certs/7560.pem"
	I1004 03:21:17.703119   73775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7560.pem
	I1004 03:21:17.706561   73775 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/7560.pem
	I1004 03:21:17.706626   73775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7560.pem
	I1004 03:21:17.713352   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7560.pem /etc/ssl/certs/51391683.0"
	I1004 03:21:17.721851   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75602.pem && ln -fs /usr/share/ca-certificates/75602.pem /etc/ssl/certs/75602.pem"
	I1004 03:21:17.730950   73775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75602.pem
	I1004 03:21:17.734350   73775 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/75602.pem
	I1004 03:21:17.734435   73775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75602.pem
	I1004 03:21:17.741130   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75602.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:21:17.749958   73775 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:21:17.753354   73775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 03:21:17.759803   73775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 03:21:17.766666   73775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 03:21:17.773229   73775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 03:21:17.779760   73775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 03:21:17.786484   73775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
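Editor's note: each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether that certificate expires within the next 24 hours. The equivalent check in Go with crypto/x509, as a sketch; the function name is mine and the path is one of the certs checked in the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // which is what `openssl x509 -checkend 86400` tests for d = 24h.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }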
	I1004 03:21:17.793037   73775 kubeadm.go:392] StartCluster: {Name:ha-481241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-481241 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metal
lb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:21:17.793168   73775 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 03:21:17.793257   73775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 03:21:17.829100   73775 cri.go:89] found id: ""
	I1004 03:21:17.829171   73775 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 03:21:17.838163   73775 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 03:21:17.838185   73775 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 03:21:17.838258   73775 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 03:21:17.846749   73775 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 03:21:17.847205   73775 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-481241" does not appear in /home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:21:17.847323   73775 kubeconfig.go:62] /home/jenkins/minikube-integration/19546-2238/kubeconfig needs updating (will repair): [kubeconfig missing "ha-481241" cluster setting kubeconfig missing "ha-481241" context setting]
	I1004 03:21:17.847598   73775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/kubeconfig: {Name:mkd1a87175976669e9a14c51acaef20b883a2130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:21:17.848028   73775 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:21:17.848305   73775 kapi.go:59] client config for ha-481241: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/client.key", CAFile:"/home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a17550), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 03:21:17.848956   73775 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 03:21:17.849051   73775 cert_rotation.go:140] Starting client certificate rotation controller
	I1004 03:21:17.857905   73775 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I1004 03:21:17.857927   73775 kubeadm.go:597] duration metric: took 19.735791ms to restartPrimaryControlPlane
	I1004 03:21:17.857936   73775 kubeadm.go:394] duration metric: took 64.906881ms to StartCluster
	I1004 03:21:17.857952   73775 settings.go:142] acquiring lock: {Name:mk9c80036423f55b2143f3dcbc4f16f5b78f75ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:21:17.858016   73775 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:21:17.858648   73775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/kubeconfig: {Name:mkd1a87175976669e9a14c51acaef20b883a2130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:21:17.858846   73775 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:21:17.858873   73775 start.go:241] waiting for startup goroutines ...
	I1004 03:21:17.858889   73775 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 03:21:17.859317   73775 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:21:17.862801   73775 out.go:177] * Enabled addons: 
	I1004 03:21:17.865423   73775 addons.go:510] duration metric: took 6.535846ms for enable addons: enabled=[]
	I1004 03:21:17.865459   73775 start.go:246] waiting for cluster config update ...
	I1004 03:21:17.865483   73775 start.go:255] writing updated cluster config ...
	I1004 03:21:17.868503   73775 out.go:201] 
	I1004 03:21:17.871424   73775 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:21:17.871544   73775 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/config.json ...
	I1004 03:21:17.874403   73775 out.go:177] * Starting "ha-481241-m02" control-plane node in "ha-481241" cluster
	I1004 03:21:17.876881   73775 cache.go:121] Beginning downloading kic base image for docker with crio
	I1004 03:21:17.879416   73775 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1004 03:21:17.881869   73775 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:21:17.881893   73775 cache.go:56] Caching tarball of preloaded images
	I1004 03:21:17.881955   73775 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 03:21:17.881987   73775 preload.go:172] Found /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1004 03:21:17.882047   73775 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:21:17.882179   73775 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/config.json ...
	I1004 03:21:17.899434   73775 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1004 03:21:17.899453   73775 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1004 03:21:17.899466   73775 cache.go:194] Successfully downloaded all kic artifacts
	I1004 03:21:17.899490   73775 start.go:360] acquireMachinesLock for ha-481241-m02: {Name:mke05d62fffc03d10e2183e31e32edb6693cc273 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:21:17.899541   73775 start.go:364] duration metric: took 33.829µs to acquireMachinesLock for "ha-481241-m02"
	I1004 03:21:17.899567   73775 start.go:96] Skipping create...Using existing machine configuration
	I1004 03:21:17.899577   73775 fix.go:54] fixHost starting: m02
	I1004 03:21:17.899826   73775 cli_runner.go:164] Run: docker container inspect ha-481241-m02 --format={{.State.Status}}
	I1004 03:21:17.915838   73775 fix.go:112] recreateIfNeeded on ha-481241-m02: state=Stopped err=<nil>
	W1004 03:21:17.915868   73775 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 03:21:17.919292   73775 out.go:177] * Restarting existing docker container for "ha-481241-m02" ...
	I1004 03:21:17.921890   73775 cli_runner.go:164] Run: docker start ha-481241-m02
	I1004 03:21:18.204642   73775 cli_runner.go:164] Run: docker container inspect ha-481241-m02 --format={{.State.Status}}
	I1004 03:21:18.228597   73775 kic.go:430] container "ha-481241-m02" state is running.
	I1004 03:21:18.229036   73775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481241-m02
	I1004 03:21:18.255067   73775 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/config.json ...
	I1004 03:21:18.255317   73775 machine.go:93] provisionDockerMachine start ...
	I1004 03:21:18.255386   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m02
	I1004 03:21:18.278399   73775 main.go:141] libmachine: Using SSH client type: native
	I1004 03:21:18.278633   73775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1004 03:21:18.278650   73775 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 03:21:18.279263   73775 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57510->127.0.0.1:32833: read: connection reset by peer
	I1004 03:21:21.422036   73775 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481241-m02
	
	I1004 03:21:21.422058   73775 ubuntu.go:169] provisioning hostname "ha-481241-m02"
	I1004 03:21:21.422122   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m02
	I1004 03:21:21.445151   73775 main.go:141] libmachine: Using SSH client type: native
	I1004 03:21:21.445425   73775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1004 03:21:21.445438   73775 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481241-m02 && echo "ha-481241-m02" | sudo tee /etc/hostname
	I1004 03:21:21.603173   73775 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481241-m02
	
	I1004 03:21:21.603335   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m02
	I1004 03:21:21.627352   73775 main.go:141] libmachine: Using SSH client type: native
	I1004 03:21:21.627689   73775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1004 03:21:21.627709   73775 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481241-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481241-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481241-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:21:21.770022   73775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
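
The shell quoted above is idempotent: if no line of /etc/hosts already ends in the new hostname, it either rewrites an existing 127.0.1.1 entry in place or appends one, so the renamed host keeps resolving its own name. A trivial helper that renders the same snippet for an arbitrary hostname, sketched for illustration (the function is hypothetical, not minikube code):

    package main

    import "fmt"

    // hostsSnippet reproduces the provisioning shell shown above for a hostname.
    func hostsSnippet(hostname string) string {
        return fmt.Sprintf(`
    		if ! grep -xq '.*\s%s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname, hostname, hostname)
    }

    func main() {
        fmt.Println(hostsSnippet("ha-481241-m02"))
    }
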
	I1004 03:21:21.770047   73775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19546-2238/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-2238/.minikube}
	I1004 03:21:21.770065   73775 ubuntu.go:177] setting up certificates
	I1004 03:21:21.770075   73775 provision.go:84] configureAuth start
	I1004 03:21:21.770168   73775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481241-m02
	I1004 03:21:21.801341   73775 provision.go:143] copyHostCerts
	I1004 03:21:21.801378   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem
	I1004 03:21:21.801411   73775 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem, removing ...
	I1004 03:21:21.801418   73775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem
	I1004 03:21:21.801507   73775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem (1082 bytes)
	I1004 03:21:21.801598   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem
	I1004 03:21:21.801620   73775 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem, removing ...
	I1004 03:21:21.801626   73775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem
	I1004 03:21:21.801655   73775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem (1123 bytes)
	I1004 03:21:21.801700   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem
	I1004 03:21:21.801720   73775 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem, removing ...
	I1004 03:21:21.801724   73775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem
	I1004 03:21:21.801749   73775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem (1679 bytes)
	I1004 03:21:21.801799   73775 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem org=jenkins.ha-481241-m02 san=[127.0.0.1 192.168.49.3 ha-481241-m02 localhost minikube]
	I1004 03:21:22.387921   73775 provision.go:177] copyRemoteCerts
	I1004 03:21:22.387996   73775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:21:22.388043   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m02
	I1004 03:21:22.405030   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m02/id_rsa Username:docker}
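
The sshutil line above gives the exact endpoint (127.0.0.1:32833) and key (machines/ha-481241-m02/id_rsa) used for provisioning, and the earlier "Error dialing TCP ... connection reset by peer" at 03:21:18 shows the first dial racing the container's sshd after `docker start` before a later attempt succeeds. A minimal dial-with-retry sketch against that endpoint, assuming golang.org/x/crypto/ssh and key auth (illustrative only, not the libmachine/sshutil implementation):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
            Timeout:         5 * time.Second,
        }

        // sshd inside the freshly started container may not be up yet, so retry
        // the dial (the log above shows one "connection reset by peer").
        var client *ssh.Client
        for attempt := 0; attempt < 30; attempt++ {
            client, err = ssh.Dial("tcp", "127.0.0.1:32833", cfg)
            if err == nil {
                break
            }
            time.Sleep(time.Second)
        }
        if client == nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, _ := sess.CombinedOutput("hostname")
        fmt.Printf("%s", out)
    }
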
	I1004 03:21:22.502638   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:21:22.502700   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 03:21:22.529718   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:21:22.529804   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:21:22.557800   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:21:22.557865   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 03:21:22.585013   73775 provision.go:87] duration metric: took 814.92532ms to configureAuth
	I1004 03:21:22.585040   73775 ubuntu.go:193] setting minikube options for container-runtime
	I1004 03:21:22.585288   73775 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:21:22.585395   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m02
	I1004 03:21:22.609317   73775 main.go:141] libmachine: Using SSH client type: native
	I1004 03:21:22.609557   73775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1004 03:21:22.609579   73775 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:21:22.990034   73775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:21:22.990055   73775 machine.go:96] duration metric: took 4.734721175s to provisionDockerMachine
	I1004 03:21:22.990067   73775 start.go:293] postStartSetup for "ha-481241-m02" (driver="docker")
	I1004 03:21:22.990079   73775 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:21:22.990141   73775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:21:22.990184   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m02
	I1004 03:21:23.007449   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m02/id_rsa Username:docker}
	I1004 03:21:23.106478   73775 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:21:23.110250   73775 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1004 03:21:23.110287   73775 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1004 03:21:23.110298   73775 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1004 03:21:23.110305   73775 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1004 03:21:23.110316   73775 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/addons for local assets ...
	I1004 03:21:23.110370   73775 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/files for local assets ...
	I1004 03:21:23.110449   73775 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem -> 75602.pem in /etc/ssl/certs
	I1004 03:21:23.110460   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem -> /etc/ssl/certs/75602.pem
	I1004 03:21:23.110559   73775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:21:23.120697   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem --> /etc/ssl/certs/75602.pem (1708 bytes)
	I1004 03:21:23.154079   73775 start.go:296] duration metric: took 163.997146ms for postStartSetup
	I1004 03:21:23.154164   73775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:21:23.154212   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m02
	I1004 03:21:23.183013   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m02/id_rsa Username:docker}
	I1004 03:21:23.351102   73775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1004 03:21:23.371074   73775 fix.go:56] duration metric: took 5.471488547s for fixHost
	I1004 03:21:23.371099   73775 start.go:83] releasing machines lock for "ha-481241-m02", held for 5.471544399s
	I1004 03:21:23.371167   73775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481241-m02
	I1004 03:21:23.400706   73775 out.go:177] * Found network options:
	I1004 03:21:23.404017   73775 out.go:177]   - NO_PROXY=192.168.49.2
	W1004 03:21:23.421030   73775 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 03:21:23.421077   73775 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:21:23.421153   73775 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:21:23.421199   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m02
	I1004 03:21:23.421220   73775 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:21:23.421279   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m02
	I1004 03:21:23.470718   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m02/id_rsa Username:docker}
	I1004 03:21:23.473564   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m02/id_rsa Username:docker}
	I1004 03:21:24.062736   73775 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 03:21:24.083078   73775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:21:24.110934   73775 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1004 03:21:24.111013   73775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:21:24.154804   73775 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 03:21:24.154829   73775 start.go:495] detecting cgroup driver to use...
	I1004 03:21:24.154861   73775 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1004 03:21:24.154916   73775 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:21:24.209966   73775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:21:24.273672   73775 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:21:24.273739   73775 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:21:24.315638   73775 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:21:24.341574   73775 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:21:24.642478   73775 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:21:24.858608   73775 docker.go:233] disabling docker service ...
	I1004 03:21:24.858729   73775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:21:24.961227   73775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:21:25.005050   73775 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:21:25.290529   73775 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:21:25.588610   73775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:21:25.655466   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:21:25.726641   73775 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:21:25.726712   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:25.785731   73775 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:21:25.785802   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:25.837028   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:25.873737   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:25.891413   73775 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:21:25.931153   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:25.956813   73775 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:25.994802   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:21:26.038550   73775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:21:26.071630   73775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:21:26.098819   73775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:21:26.402524   73775 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:21:27.831296   73775 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.42869388s)
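
The sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs", conmon_cgroup = "pod" and a default_sysctls entry for net.ipv4.ip_unprivileged_port_start=0, and the crio restart that just completed picks them up. A quick sanity check of that end state, sketched under the file layout shown in the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
        if err != nil {
            panic(err)
        }
        conf := string(data)
        // Values the sed edits above are expected to have put in place.
        for _, want := range []string{
            `pause_image = "registry.k8s.io/pause:3.10"`,
            `cgroup_manager = "cgroupfs"`,
            `conmon_cgroup = "pod"`,
            `"net.ipv4.ip_unprivileged_port_start=0"`,
        } {
            if !strings.Contains(conf, want) {
                fmt.Printf("missing: %s\n", want)
            }
        }
    }
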
	I1004 03:21:27.831321   73775 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:21:27.831374   73775 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:21:27.842601   73775 start.go:563] Will wait 60s for crictl version
	I1004 03:21:27.842667   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:21:27.851122   73775 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:21:27.961104   73775 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1004 03:21:27.961192   73775 ssh_runner.go:195] Run: crio --version
	I1004 03:21:28.061029   73775 ssh_runner.go:195] Run: crio --version
	I1004 03:21:28.137890   73775 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1004 03:21:28.140775   73775 out.go:177]   - env NO_PROXY=192.168.49.2
	I1004 03:21:28.143243   73775 cli_runner.go:164] Run: docker network inspect ha-481241 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 03:21:28.171227   73775 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1004 03:21:28.174858   73775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:21:28.185133   73775 mustload.go:65] Loading cluster: ha-481241
	I1004 03:21:28.185395   73775 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:21:28.185664   73775 cli_runner.go:164] Run: docker container inspect ha-481241 --format={{.State.Status}}
	I1004 03:21:28.213367   73775 host.go:66] Checking if "ha-481241" exists ...
	I1004 03:21:28.213644   73775 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241 for IP: 192.168.49.3
	I1004 03:21:28.213658   73775 certs.go:194] generating shared ca certs ...
	I1004 03:21:28.213674   73775 certs.go:226] acquiring lock for ca certs: {Name:mk468b07ab6620fd74cefc3667e1a8643008ce5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:21:28.213793   73775 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key
	I1004 03:21:28.213837   73775 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key
	I1004 03:21:28.213848   73775 certs.go:256] generating profile certs ...
	I1004 03:21:28.213922   73775 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/client.key
	I1004 03:21:28.213992   73775 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.key.a62d4a4a
	I1004 03:21:28.214038   73775 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/proxy-client.key
	I1004 03:21:28.214051   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:21:28.214066   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:21:28.214081   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:21:28.214101   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:21:28.214116   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:21:28.214129   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:21:28.214144   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:21:28.214155   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:21:28.214211   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem (1338 bytes)
	W1004 03:21:28.214244   73775 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560_empty.pem, impossibly tiny 0 bytes
	I1004 03:21:28.214256   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:21:28.214280   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:21:28.214306   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:21:28.214330   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem (1679 bytes)
	I1004 03:21:28.214377   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem (1708 bytes)
	I1004 03:21:28.214409   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:21:28.214426   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem -> /usr/share/ca-certificates/7560.pem
	I1004 03:21:28.214443   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem -> /usr/share/ca-certificates/75602.pem
	I1004 03:21:28.214514   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241
	I1004 03:21:28.249425   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241/id_rsa Username:docker}
	I1004 03:21:28.373525   73775 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1004 03:21:28.394003   73775 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1004 03:21:28.416974   73775 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1004 03:21:28.420514   73775 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1004 03:21:28.455031   73775 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1004 03:21:28.469396   73775 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1004 03:21:28.495663   73775 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1004 03:21:28.504055   73775 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1004 03:21:28.537312   73775 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1004 03:21:28.548352   73775 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1004 03:21:28.570861   73775 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1004 03:21:28.583309   73775 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1004 03:21:28.608751   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:21:28.646782   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 03:21:28.693629   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:21:28.731751   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 03:21:28.769257   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1004 03:21:28.810750   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 03:21:28.843189   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:21:28.889081   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:21:28.928242   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:21:28.965596   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem --> /usr/share/ca-certificates/7560.pem (1338 bytes)
	I1004 03:21:29.005749   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem --> /usr/share/ca-certificates/75602.pem (1708 bytes)
	I1004 03:21:29.042189   73775 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1004 03:21:29.073627   73775 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1004 03:21:29.101377   73775 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1004 03:21:29.133663   73775 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1004 03:21:29.166750   73775 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1004 03:21:29.195150   73775 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1004 03:21:29.223547   73775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
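
Everything from sa.pub down to etcd/ca.key is read off the primary into memory and written to the same paths on m02, since all control-plane nodes must share the service-account keypair, front-proxy CA and etcd CA for tokens and peer TLS to verify cluster-wide. One way to confirm the copies match is to compare digests on both nodes; a sketch over the local files (hypothetical helper, not part of minikube):

    package main

    import (
        "crypto/sha256"
        "fmt"
        "os"
    )

    func main() {
        // Files that must be byte-identical on every control-plane node.
        for _, p := range []string{
            "/var/lib/minikube/certs/sa.pub",
            "/var/lib/minikube/certs/sa.key",
            "/var/lib/minikube/certs/front-proxy-ca.crt",
            "/var/lib/minikube/certs/etcd/ca.crt",
        } {
            data, err := os.ReadFile(p)
            if err != nil {
                fmt.Printf("%s: %v\n", p, err)
                continue
            }
            fmt.Printf("%x  %s\n", sha256.Sum256(data), p)
        }
    }

Comparing the printed digests between ha-481241 and ha-481241-m02 is enough to catch a stale or truncated copy.
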
	I1004 03:21:29.250960   73775 ssh_runner.go:195] Run: openssl version
	I1004 03:21:29.256805   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:21:29.286590   73775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:21:29.290567   73775 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:21:29.290654   73775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:21:29.297656   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:21:29.306914   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7560.pem && ln -fs /usr/share/ca-certificates/7560.pem /etc/ssl/certs/7560.pem"
	I1004 03:21:29.322395   73775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7560.pem
	I1004 03:21:29.326415   73775 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/7560.pem
	I1004 03:21:29.326497   73775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7560.pem
	I1004 03:21:29.333842   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7560.pem /etc/ssl/certs/51391683.0"
	I1004 03:21:29.343647   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75602.pem && ln -fs /usr/share/ca-certificates/75602.pem /etc/ssl/certs/75602.pem"
	I1004 03:21:29.353787   73775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75602.pem
	I1004 03:21:29.357694   73775 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/75602.pem
	I1004 03:21:29.357780   73775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75602.pem
	I1004 03:21:29.365146   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75602.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:21:29.378967   73775 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:21:29.382678   73775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 03:21:29.393970   73775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 03:21:29.405810   73775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 03:21:29.415048   73775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 03:21:29.426610   73775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 03:21:29.441831   73775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
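
Each `openssl x509 -noout -checkend 86400` above asks whether the certificate will still be valid 24 hours from now, which presumably feeds the decision of whether the restart needs to regenerate any control-plane certs. The same check in Go, sketched for a PEM file passed on the command line:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        if len(os.Args) < 2 {
            fmt.Println("usage: checkend <cert.pem>")
            os.Exit(2)
        }
        data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/etcd/peer.crt
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("not PEM")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of -checkend 86400: still valid 24h from now?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("ok, valid until", cert.NotAfter)
    }
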
	I1004 03:21:29.450733   73775 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 crio true true} ...
	I1004 03:21:29.450845   73775 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481241-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-481241 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
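
The generated drop-in pins this kubelet to --hostname-override=ha-481241-m02 and --node-ip=192.168.49.3; the trailing "config:" block appears to be the profile's KubernetesConfig echoed for reference. Rendering such a drop-in is plain string templating; a sketch with text/template whose template text mirrors the unit above (the struct and values are only illustrative):

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        err := t.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.31.1", "ha-481241-m02", "192.168.49.3"})
        if err != nil {
            panic(err)
        }
    }
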
	I1004 03:21:29.450883   73775 kube-vip.go:115] generating kube-vip config ...
	I1004 03:21:29.450929   73775 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1004 03:21:29.468723   73775 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1004 03:21:29.468798   73775 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
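
Because `lsmod | grep ip_vs` returned nothing, control-plane load-balancing is skipped and kube-vip is configured for ARP-based failover of the VIP 192.168.49.254 (vip_arp=true, leader election on the plndr-cp-lock lease); the manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down. The module probe itself can be done without shelling out by scanning /proc/modules, which is all lsmod reads anyway; a sketch, not minikube's kube-vip.go:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // hasModule reports whether a kernel module (e.g. "ip_vs") is loaded,
    // reading /proc/modules instead of shelling out to lsmod.
    func hasModule(name string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), name+" ") {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := hasModule("ip_vs")
        if err != nil {
            panic(err)
        }
        fmt.Println("ip_vs loaded:", ok) // false here, so kube-vip falls back to ARP mode
    }
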
	I1004 03:21:29.468867   73775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:21:29.481677   73775 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:21:29.481758   73775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1004 03:21:29.494456   73775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1004 03:21:29.531872   73775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:21:29.553785   73775 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1004 03:21:29.580431   73775 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:21:29.584126   73775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:21:29.600783   73775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:21:29.775395   73775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:21:29.797781   73775 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:21:29.798072   73775 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:21:29.802873   73775 out.go:177] * Verifying Kubernetes components...
	I1004 03:21:29.805663   73775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:21:29.944409   73775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:21:29.962312   73775 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:21:29.962610   73775 kapi.go:59] client config for ha-481241: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/client.key", CAFile:"/home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a17550), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1004 03:21:29.962681   73775 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1004 03:21:29.962933   73775 node_ready.go:35] waiting up to 6m0s for node "ha-481241-m02" to be "Ready" ...
	I1004 03:21:29.963037   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:21:29.963057   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:29.963067   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:29.963077   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:40.618625   73775 round_trippers.go:574] Response Status: 500 Internal Server Error in 10655 milliseconds
	I1004 03:21:40.624604   73775 node_ready.go:53] error getting node "ha-481241-m02": etcdserver: request timed out
	I1004 03:21:40.624675   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:21:40.624681   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:40.624690   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:40.624696   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:47.639404   73775 round_trippers.go:574] Response Status: 500 Internal Server Error in 7014 milliseconds
	I1004 03:21:47.639517   73775 node_ready.go:53] error getting node "ha-481241-m02": etcdserver: request timed out
	I1004 03:21:47.639580   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:21:47.639585   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:47.639593   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:47.639597   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:48.629797   73775 round_trippers.go:574] Response Status: 200 OK in 990 milliseconds
	I1004 03:21:48.631222   73775 node_ready.go:49] node "ha-481241-m02" has status "Ready":"True"
	I1004 03:21:48.631254   73775 node_ready.go:38] duration metric: took 18.668301479s for node "ha-481241-m02" to be "Ready" ...
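
The wait above polls GET /api/v1/nodes/ha-481241-m02, riding out the two etcdserver-timeout 500s while the restarted members re-form quorum, and only counts the node once its Ready condition is True. A compact client-go version of that readiness poll, assuming the kubeconfig path shown earlier (illustrative, not minikube's node_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19546-2238/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-481241-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API/etcd errors: keep polling, as the log above does
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println(`node "ha-481241-m02" is Ready`)
    }
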
	I1004 03:21:48.631266   73775 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:21:48.631305   73775 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1004 03:21:48.631317   73775 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1004 03:21:48.631378   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:48.631384   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:48.631391   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:48.631396   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:48.636273   73775 round_trippers.go:574] Response Status: 429 Too Many Requests in 4 milliseconds
	I1004 03:21:49.636712   73775 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:49.636763   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:49.636770   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:49.636778   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:49.636783   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:49.680374   73775 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
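
The 429 just above and the "Waited ... due to client-side throttling" messages that follow are two layers of the same thing: API Priority and Fairness pushing back on the server side, and client-go's default rate limiter (QPS and Burst are left at 0 in the rest.Config dumps, so defaults of roughly 5 QPS with burst 10 apply) pacing requests on the client side. If the burst of per-pod GETs below ever needed loosening, the limiter could be raised when the client is built; a sketch only, not something this test does:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19546-2238/kubeconfig")
        if err != nil {
            panic(err)
        }
        // Zero values fall back to client-go's defaults, which produce the
        // "client-side throttling" waits above; raise them for chatty polling loops.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = cs
        fmt.Println("client configured with QPS", cfg.QPS, "burst", cfg.Burst)
    }
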
	I1004 03:21:49.695168   73775 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bmz2w" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:49.695821   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bmz2w
	I1004 03:21:49.695850   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:49.695874   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:49.695895   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:49.698886   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:49.699530   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:21:49.699540   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:49.699549   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:49.699555   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:49.701971   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:49.702896   73775 pod_ready.go:93] pod "coredns-7c65d6cfc9-bmz2w" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:49.702911   73775 pod_ready.go:82] duration metric: took 7.211636ms for pod "coredns-7c65d6cfc9-bmz2w" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:49.702921   73775 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:49.702993   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:21:49.702999   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:49.703007   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:49.703013   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:49.705937   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:49.707034   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:21:49.707093   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:49.707116   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:49.707138   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:49.710215   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:49.711202   73775 pod_ready.go:93] pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:49.711249   73775 pod_ready.go:82] duration metric: took 8.320223ms for pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:49.711283   73775 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-481241" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:49.711381   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-481241
	I1004 03:21:49.711416   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:49.711438   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:49.711460   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:49.714560   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:49.715609   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:21:49.715661   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:49.715683   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:49.715706   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:49.718559   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:49.719514   73775 pod_ready.go:93] pod "etcd-ha-481241" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:49.719556   73775 pod_ready.go:82] duration metric: took 8.252966ms for pod "etcd-ha-481241" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:49.719589   73775 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:49.719685   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-481241-m02
	I1004 03:21:49.719718   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:49.719741   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:49.719761   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:49.723098   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:49.724237   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:21:49.724285   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:49.724306   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:49.724327   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:49.727196   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:49.728193   73775 pod_ready.go:93] pod "etcd-ha-481241-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:49.728236   73775 pod_ready.go:82] duration metric: took 8.625047ms for pod "etcd-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:49.728275   73775 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:49.728380   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-481241-m03
	I1004 03:21:49.728405   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:49.728437   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:49.728456   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:49.731465   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:49.836782   73775 request.go:632] Waited for 103.18493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m03
	I1004 03:21:49.836838   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m03
	I1004 03:21:49.836851   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:49.836860   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:49.836882   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:49.839390   73775 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1004 03:21:49.839748   73775 pod_ready.go:98] node "ha-481241-m03" hosting pod "etcd-ha-481241-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-481241-m03": nodes "ha-481241-m03" not found
	I1004 03:21:49.839789   73775 pod_ready.go:82] duration metric: took 111.488233ms for pod "etcd-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	E1004 03:21:49.839830   73775 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-481241-m03" hosting pod "etcd-ha-481241-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-481241-m03": nodes "ha-481241-m03" not found
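When the hosting node has already been removed, the node lookup returns 404 and the wait is skipped rather than failed: the WaitExtra error is logged and the loop moves on to the next pod. A short sketch of that branch, reusing the client from the previous example; apierrors is the conventional alias for k8s.io/apimachinery/pkg/api/errors:

    // nodeGone reports whether the node lookup failed with 404 Not Found,
    // the case pod_ready.go treats as "skipping!" rather than a hard failure.
    func nodeGone(ctx context.Context, client *kubernetes.Clientset, nodeName string) (bool, error) {
        _, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if apierrors.IsNotFound(err) { // e.g. nodes "ha-481241-m03" not found
            return true, nil
        }
        return false, err
    }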
	I1004 03:21:49.839869   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-481241" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:50.037101   73775 request.go:632] Waited for 197.12869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-481241
	I1004 03:21:50.037289   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-481241
	I1004 03:21:50.037318   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:50.037355   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:50.037375   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:50.040710   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:50.237561   73775 request.go:632] Waited for 183.634144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:21:50.237659   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:21:50.237682   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:50.237717   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:50.237741   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:50.242671   73775 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:50.243395   73775 pod_ready.go:93] pod "kube-apiserver-ha-481241" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:50.243443   73775 pod_ready.go:82] duration metric: took 403.55029ms for pod "kube-apiserver-ha-481241" in "kube-system" namespace to be "Ready" ...
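The "Waited for ... due to client-side throttling, not priority and fairness" lines are produced by client-go's default client-side rate limiter (QPS 5, burst 10), not by the API server. The limit is set on rest.Config before building the clientset; the numbers below are illustrative, not minikube's actual settings:

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFasterClient builds a clientset with a higher client-side rate limit,
    // which shortens the request.go:632 waits seen in the log above.
    func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5 requests/second
        cfg.Burst = 100 // default burst is 10
        return kubernetes.NewForConfig(cfg)
    }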
	I1004 03:21:50.243470   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:50.436749   73775 request.go:632] Waited for 193.178472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-481241-m02
	I1004 03:21:50.436845   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-481241-m02
	I1004 03:21:50.436868   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:50.436904   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:50.436923   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:50.440387   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:50.636763   73775 request.go:632] Waited for 195.068988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:21:50.636868   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:21:50.636900   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:50.636925   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:50.636942   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:50.646729   73775 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1004 03:21:50.647746   73775 pod_ready.go:93] pod "kube-apiserver-ha-481241-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:50.647806   73775 pod_ready.go:82] duration metric: took 404.314867ms for pod "kube-apiserver-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:50.647834   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:50.837184   73775 request.go:632] Waited for 189.257643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-481241-m03
	I1004 03:21:50.837291   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-481241-m03
	I1004 03:21:50.837311   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:50.837344   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:50.837367   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:50.840833   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:51.036808   73775 request.go:632] Waited for 195.183653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m03
	I1004 03:21:51.036906   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m03
	I1004 03:21:51.036933   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:51.036988   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:51.037007   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:51.041509   73775 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I1004 03:21:51.041707   73775 pod_ready.go:98] node "ha-481241-m03" hosting pod "kube-apiserver-ha-481241-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-481241-m03": nodes "ha-481241-m03" not found
	I1004 03:21:51.041770   73775 pod_ready.go:82] duration metric: took 393.888361ms for pod "kube-apiserver-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	E1004 03:21:51.041810   73775 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-481241-m03" hosting pod "kube-apiserver-ha-481241-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-481241-m03": nodes "ha-481241-m03" not found
	I1004 03:21:51.041836   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-481241" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:51.237285   73775 request.go:632] Waited for 195.3565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-481241
	I1004 03:21:51.237421   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-481241
	I1004 03:21:51.237446   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:51.237468   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:51.237491   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:51.241068   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:51.436805   73775 request.go:632] Waited for 194.219885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:21:51.436933   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:21:51.436964   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:51.436999   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:51.437032   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:51.440461   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:51.441456   73775 pod_ready.go:93] pod "kube-controller-manager-ha-481241" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:51.441508   73775 pod_ready.go:82] duration metric: took 399.641942ms for pod "kube-controller-manager-ha-481241" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:51.441534   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:51.636793   73775 request.go:632] Waited for 195.165897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-481241-m02
	I1004 03:21:51.636896   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-481241-m02
	I1004 03:21:51.636931   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:51.636955   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:51.636975   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:51.639881   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:51.837118   73775 request.go:632] Waited for 196.093605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:21:51.837241   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:21:51.837271   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:51.837299   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:51.837354   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:51.840554   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:51.841172   73775 pod_ready.go:93] pod "kube-controller-manager-ha-481241-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:51.841250   73775 pod_ready.go:82] duration metric: took 399.693839ms for pod "kube-controller-manager-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:51.841280   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:52.037169   73775 request.go:632] Waited for 195.809236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-481241-m03
	I1004 03:21:52.037305   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-481241-m03
	I1004 03:21:52.037318   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:52.037328   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:52.037342   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:52.040167   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:52.237692   73775 request.go:632] Waited for 196.332904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m03
	I1004 03:21:52.237766   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m03
	I1004 03:21:52.237777   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:52.237789   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:52.237798   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:52.240625   73775 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1004 03:21:52.240744   73775 pod_ready.go:98] node "ha-481241-m03" hosting pod "kube-controller-manager-ha-481241-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-481241-m03": nodes "ha-481241-m03" not found
	I1004 03:21:52.240765   73775 pod_ready.go:82] duration metric: took 399.464502ms for pod "kube-controller-manager-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	E1004 03:21:52.240776   73775 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-481241-m03" hosting pod "kube-controller-manager-ha-481241-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-481241-m03": nodes "ha-481241-m03" not found
	I1004 03:21:52.240789   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25pr9" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:52.437047   73775 request.go:632] Waited for 196.181866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25pr9
	I1004 03:21:52.437149   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25pr9
	I1004 03:21:52.437165   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:52.437174   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:52.437179   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:52.439993   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:52.636785   73775 request.go:632] Waited for 196.118253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m04
	I1004 03:21:52.636845   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m04
	I1004 03:21:52.636852   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:52.636862   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:52.636871   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:52.639556   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:52.640061   73775 pod_ready.go:93] pod "kube-proxy-25pr9" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:52.640083   73775 pod_ready.go:82] duration metric: took 399.285158ms for pod "kube-proxy-25pr9" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:52.640095   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7hdvx" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:52.837460   73775 request.go:632] Waited for 197.298691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hdvx
	I1004 03:21:52.837525   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hdvx
	I1004 03:21:52.837538   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:52.837548   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:52.837556   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:52.840806   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:53.037294   73775 request.go:632] Waited for 195.200343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m03
	I1004 03:21:53.037352   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m03
	I1004 03:21:53.037359   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:53.037372   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:53.037381   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:53.039902   73775 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1004 03:21:53.040023   73775 pod_ready.go:98] node "ha-481241-m03" hosting pod "kube-proxy-7hdvx" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-481241-m03": nodes "ha-481241-m03" not found
	I1004 03:21:53.040040   73775 pod_ready.go:82] duration metric: took 399.938818ms for pod "kube-proxy-7hdvx" in "kube-system" namespace to be "Ready" ...
	E1004 03:21:53.040058   73775 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-481241-m03" hosting pod "kube-proxy-7hdvx" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-481241-m03": nodes "ha-481241-m03" not found
	I1004 03:21:53.040066   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9dn8z" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:53.237342   73775 request.go:632] Waited for 197.202455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9dn8z
	I1004 03:21:53.237407   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9dn8z
	I1004 03:21:53.237413   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:53.237422   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:53.237435   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:53.240515   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:53.437594   73775 request.go:632] Waited for 196.341208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:21:53.437654   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:21:53.437663   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:53.437673   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:53.437683   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:53.440462   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:53.441066   73775 pod_ready.go:93] pod "kube-proxy-9dn8z" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:53.441089   73775 pod_ready.go:82] duration metric: took 401.010269ms for pod "kube-proxy-9dn8z" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:53.441102   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q9kvx" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:53.637557   73775 request.go:632] Waited for 196.392859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9kvx
	I1004 03:21:53.637678   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9kvx
	I1004 03:21:53.637704   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:53.637725   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:53.637732   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:53.641155   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:53.837709   73775 request.go:632] Waited for 195.325527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:21:53.837810   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:21:53.837823   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:53.837833   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:53.837838   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:53.840730   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:53.841277   73775 pod_ready.go:93] pod "kube-proxy-q9kvx" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:53.841297   73775 pod_ready.go:82] duration metric: took 400.187193ms for pod "kube-proxy-q9kvx" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:53.841308   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-481241" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:54.037639   73775 request.go:632] Waited for 196.262711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-481241
	I1004 03:21:54.037723   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-481241
	I1004 03:21:54.037737   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:54.037746   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:54.037755   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:54.040775   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:54.236811   73775 request.go:632] Waited for 195.255456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:21:54.236914   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:21:54.236927   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:54.236937   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:54.236947   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:54.241146   73775 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:54.241739   73775 pod_ready.go:93] pod "kube-scheduler-ha-481241" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:54.241762   73775 pod_ready.go:82] duration metric: took 400.446348ms for pod "kube-scheduler-ha-481241" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:54.241788   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:54.437683   73775 request.go:632] Waited for 195.824138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-481241-m02
	I1004 03:21:54.437738   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-481241-m02
	I1004 03:21:54.437744   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:54.437751   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:54.437762   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:54.441708   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:54.637430   73775 request.go:632] Waited for 195.041568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:21:54.637529   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:21:54.637539   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:54.637549   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:54.637553   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:54.640360   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:54.640895   73775 pod_ready.go:93] pod "kube-scheduler-ha-481241-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:54.640917   73775 pod_ready.go:82] duration metric: took 399.11566ms for pod "kube-scheduler-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:54.640929   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:54.837288   73775 request.go:632] Waited for 196.292364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-481241-m03
	I1004 03:21:54.837360   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-481241-m03
	I1004 03:21:54.837367   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:54.837377   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:54.837384   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:54.840191   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:55.037409   73775 request.go:632] Waited for 196.235438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m03
	I1004 03:21:55.037473   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m03
	I1004 03:21:55.037479   73775 round_trippers.go:469] Request Headers:
	I1004 03:21:55.037494   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:55.037498   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:21:55.043736   73775 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I1004 03:21:55.043938   73775 pod_ready.go:98] node "ha-481241-m03" hosting pod "kube-scheduler-ha-481241-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-481241-m03": nodes "ha-481241-m03" not found
	I1004 03:21:55.043958   73775 pod_ready.go:82] duration metric: took 403.022015ms for pod "kube-scheduler-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	E1004 03:21:55.043969   73775 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-481241-m03" hosting pod "kube-scheduler-ha-481241-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-481241-m03": nodes "ha-481241-m03" not found
	I1004 03:21:55.043979   73775 pod_ready.go:39] duration metric: took 6.412702124s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:21:55.044003   73775 api_server.go:52] waiting for apiserver process to appear ...
	I1004 03:21:55.044068   73775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:21:55.056768   73775 api_server.go:72] duration metric: took 25.258939033s to wait for apiserver process to appear ...
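The apiserver process wait above shells out to sudo pgrep inside the node (via minikube's ssh_runner). A rough, purely local illustration of the same polling idea using os/exec; this is not how minikube runs the command on the guest:

    import (
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until the pattern matches or the timeout expires.
    // pgrep exits 0 when at least one process matches, so Run() == nil means found.
    func waitForProcess(pattern string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("pgrep", "-f", pattern).Run() == nil {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }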
	I1004 03:21:55.056797   73775 api_server.go:88] waiting for apiserver healthz status ...
	I1004 03:21:55.056823   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:21:55.064870   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:21:55.064901   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
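The 500 body above is the verbose /healthz report: every check passes except the start-service-ip-repair-controllers post-start hook, so the endpoint stays unhealthy and the loop keeps polling at roughly half-second intervals. The same per-check breakdown can be fetched directly; a sketch using the client-go REST client, with the kubeconfig path as a placeholder:

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // printHealthz fetches /healthz?verbose=true, the same per-check report
    // shown in the log, using credentials from a kubeconfig.
    func printHealthz(kubeconfig string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        body, err := client.Discovery().RESTClient().
            Get().
            AbsPath("/healthz").
            Param("verbose", "true").
            DoRaw(context.TODO())
        fmt.Println(string(body)) // on a 500 the body is still returned alongside err
        return err
    }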
	I1004 03:21:55.557625   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:21:55.565763   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:21:55.565801   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:21:56.057288   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:21:56.065113   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:21:56.065141   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:21:56.557877   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:21:56.565640   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:21:56.565680   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:21:57.057101   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:21:57.064974   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:21:57.065009   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:21:57.557591   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:21:57.565233   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:21:57.565265   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:21:58.056887   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:21:58.065673   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:21:58.065703   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:21:58.557272   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:21:58.564948   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:21:58.564979   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
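For context, the block above is one of many identical probes: minikube's api_server.go polls the control plane's /healthz endpoint roughly every 500ms and logs the apiserver's verbose response, which the apiserver returns whenever any check fails. Here every check reports ok except the start-service-ip-repair-controllers post-start hook, so each probe comes back 500 and the wait loop keeps retrying (the elided run below repeats the same probe through 03:22:06). The same per-check breakdown can also be fetched by hand, e.g. with kubectl get --raw '/healthz?verbose' against the cluster's kubeconfig context. Below is a minimal, self-contained sketch of this style of probe loop; it is not minikube's actual implementation, and the hard-coded endpoint plus the InsecureSkipVerify shortcut are assumptions made only to keep the example short (a real client would present the cluster CA and client certificates instead).

// healthzprobe: poll a kube-apiserver /healthz endpoint until it stops returning 500.
// Sketch only; endpoint URL and the insecure TLS setting are assumptions for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip certificate verification to keep the sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.49.2:8443/healthz" // endpoint taken from the log lines above

	for attempt := 0; attempt < 20; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz request failed:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// On failure the apiserver includes the per-check list, like the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // all checks, including post-start hooks, report ok
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between probes
	}
	fmt.Println("apiserver did not become healthy within the retry budget")
}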
	[... identical healthz probes and 500 responses repeated roughly every 500ms from 03:21:59 through 03:22:06, with start-service-ip-repair-controllers the only failing check each time ...]
	(healthz polls at 03:22:07.057 through 03:22:14.565 repeated every ~0.5s; each returned the same 500 response as above, with every post-start hook ok except [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld)
	I1004 03:22:15.057026   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:15.065344   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:15.065376   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:15.557920   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:15.565691   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:15.565746   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:16.057291   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:16.081035   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:16.081063   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:16.557745   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:16.566905   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:16.566935   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:17.057170   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:17.064927   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:17.064955   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:17.557349   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:17.565082   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:17.565115   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:18.056955   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:18.064696   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:18.064728   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:18.557019   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:18.565086   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:18.565115   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:19.057665   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:19.065900   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:19.065930   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:19.557863   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:19.567534   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:19.567574   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:20.056936   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:20.065527   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:20.065571   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:20.556958   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:20.564718   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:20.564744   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:21.056884   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:21.065108   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:21.065136   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:21.557664   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:21.565340   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:21.565369   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:22.056896   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:22.065289   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:22.065316   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz response omitted ...]
	I1004 03:22:22.556883   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:22.564628   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... identical healthz response omitted ...]
	W1004 03:22:22.564658   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:23.057288   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:23.065266   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:23.065299   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:23.556883   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:23.564665   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:23.564693   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:24.056954   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:24.066042   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:24.066074   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:24.557726   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:24.568526   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:24.568559   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:25.057642   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:25.067245   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:25.067275   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:25.556874   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:25.565300   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:25.565329   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:26.057906   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:26.067219   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:26.067252   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:26.557930   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:26.565595   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:26.565629   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:27.056978   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:27.064551   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:27.064594   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:27.556943   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:27.564643   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:27.564669   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:28.057060   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:28.064788   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:28.064854   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:28.557590   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:28.565216   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:28.565242   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:29.057591   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:29.065490   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:29.065518   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 03:22:29.557193   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:29.565112   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 03:22:29.565144   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
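The loop above is minikube's api_server.go health poll: roughly every half second it re-queries /healthz and gets the same 500 because the poststarthook/start-service-ip-repair-controllers check has not gone ready. A minimal sketch of reproducing the check by hand, assuming kubectl is already pointed at this cluster; the individual-check path should also be queryable, but neither command below is taken from this run:

# Full per-check breakdown, the same [+]/[-] list captured in the log:
kubectl get --raw='/healthz?verbose'

# Probe just the hook reported as failing (per-check endpoint, assumed reachable):
kubectl get --raw='/healthz/poststarthook/start-service-ip-repair-controllers'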
	I1004 03:22:30.057039   73775 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 03:22:30.057244   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 03:22:30.128706   73775 cri.go:89] found id: "e3e1e7dd624b051c9cbb07c2dd117a8bcdd8bbc9cc7a322424c638fe0d09bc5a"
	I1004 03:22:30.128774   73775 cri.go:89] found id: "12e394c4eeb98babb8af75685faed08bcbfa52b432f6f654726f31b9b4b29e9a"
	I1004 03:22:30.128794   73775 cri.go:89] found id: ""
	I1004 03:22:30.128820   73775 logs.go:282] 2 containers: [e3e1e7dd624b051c9cbb07c2dd117a8bcdd8bbc9cc7a322424c638fe0d09bc5a 12e394c4eeb98babb8af75685faed08bcbfa52b432f6f654726f31b9b4b29e9a]
	I1004 03:22:30.128913   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:30.134421   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:30.138907   73775 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 03:22:30.139038   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 03:22:30.197321   73775 cri.go:89] found id: "d9aee03c59e61913912252a19a1b08d202849714d014a4cea26ce3cfb0fd6743"
	I1004 03:22:30.197400   73775 cri.go:89] found id: "2c936133cb4df183d12ccca6443647ea7a06a9c9edd34de1ab261977f9503319"
	I1004 03:22:30.197420   73775 cri.go:89] found id: ""
	I1004 03:22:30.197444   73775 logs.go:282] 2 containers: [d9aee03c59e61913912252a19a1b08d202849714d014a4cea26ce3cfb0fd6743 2c936133cb4df183d12ccca6443647ea7a06a9c9edd34de1ab261977f9503319]
	I1004 03:22:30.197541   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:30.201705   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:30.206138   73775 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 03:22:30.206260   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 03:22:30.259182   73775 cri.go:89] found id: ""
	I1004 03:22:30.259259   73775 logs.go:282] 0 containers: []
	W1004 03:22:30.259283   73775 logs.go:284] No container was found matching "coredns"
	I1004 03:22:30.259306   73775 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 03:22:30.259405   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 03:22:30.315945   73775 cri.go:89] found id: "c2b4c970c8af8245984a2fc3d61f79b93f1ebd053526ef017539d094ec780a51"
	I1004 03:22:30.316034   73775 cri.go:89] found id: "8428cce5759dc59ce912385e346d8673cf6effe80d7bc0e9b859c4814324c3be"
	I1004 03:22:30.316062   73775 cri.go:89] found id: ""
	I1004 03:22:30.316106   73775 logs.go:282] 2 containers: [c2b4c970c8af8245984a2fc3d61f79b93f1ebd053526ef017539d094ec780a51 8428cce5759dc59ce912385e346d8673cf6effe80d7bc0e9b859c4814324c3be]
	I1004 03:22:30.316187   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:30.320201   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:30.323999   73775 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 03:22:30.324116   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 03:22:30.374952   73775 cri.go:89] found id: "6293fde9f4abd87b9929dd844b8ebbc4db80b7038b8081fface86eccebebdeb7"
	I1004 03:22:30.375027   73775 cri.go:89] found id: ""
	I1004 03:22:30.375049   73775 logs.go:282] 1 containers: [6293fde9f4abd87b9929dd844b8ebbc4db80b7038b8081fface86eccebebdeb7]
	I1004 03:22:30.375138   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:30.379428   73775 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 03:22:30.379548   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 03:22:30.438733   73775 cri.go:89] found id: "708d9e24b6e5df5b8d1d57728b176c1e813b7ad84581b2c18477695c62c02352"
	I1004 03:22:30.438806   73775 cri.go:89] found id: "b087094a54952b5f7bb5c968fe6ecdf769f1b78c663e4796f7e98160fffab2e9"
	I1004 03:22:30.438825   73775 cri.go:89] found id: ""
	I1004 03:22:30.438849   73775 logs.go:282] 2 containers: [708d9e24b6e5df5b8d1d57728b176c1e813b7ad84581b2c18477695c62c02352 b087094a54952b5f7bb5c968fe6ecdf769f1b78c663e4796f7e98160fffab2e9]
	I1004 03:22:30.438933   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:30.442969   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:30.446830   73775 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 03:22:30.446951   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 03:22:30.507998   73775 cri.go:89] found id: "fddbd17e6f0b8b3878213f8b65ac74d1d6d73509b63a07659ff7df15e7edc380"
	I1004 03:22:30.508062   73775 cri.go:89] found id: ""
	I1004 03:22:30.508084   73775 logs.go:282] 1 containers: [fddbd17e6f0b8b3878213f8b65ac74d1d6d73509b63a07659ff7df15e7edc380]
	I1004 03:22:30.508179   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:30.512603   73775 logs.go:123] Gathering logs for etcd [d9aee03c59e61913912252a19a1b08d202849714d014a4cea26ce3cfb0fd6743] ...
	I1004 03:22:30.512676   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9aee03c59e61913912252a19a1b08d202849714d014a4cea26ce3cfb0fd6743"
	I1004 03:22:30.587414   73775 logs.go:123] Gathering logs for kube-apiserver [12e394c4eeb98babb8af75685faed08bcbfa52b432f6f654726f31b9b4b29e9a] ...
	I1004 03:22:30.587647   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e394c4eeb98babb8af75685faed08bcbfa52b432f6f654726f31b9b4b29e9a"
	I1004 03:22:30.636353   73775 logs.go:123] Gathering logs for etcd [2c936133cb4df183d12ccca6443647ea7a06a9c9edd34de1ab261977f9503319] ...
	I1004 03:22:30.636378   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c936133cb4df183d12ccca6443647ea7a06a9c9edd34de1ab261977f9503319"
	I1004 03:22:30.730211   73775 logs.go:123] Gathering logs for kube-scheduler [c2b4c970c8af8245984a2fc3d61f79b93f1ebd053526ef017539d094ec780a51] ...
	I1004 03:22:30.730285   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2b4c970c8af8245984a2fc3d61f79b93f1ebd053526ef017539d094ec780a51"
	I1004 03:22:30.821580   73775 logs.go:123] Gathering logs for kube-scheduler [8428cce5759dc59ce912385e346d8673cf6effe80d7bc0e9b859c4814324c3be] ...
	I1004 03:22:30.821657   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8428cce5759dc59ce912385e346d8673cf6effe80d7bc0e9b859c4814324c3be"
	I1004 03:22:30.878416   73775 logs.go:123] Gathering logs for kube-controller-manager [b087094a54952b5f7bb5c968fe6ecdf769f1b78c663e4796f7e98160fffab2e9] ...
	I1004 03:22:30.878442   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b087094a54952b5f7bb5c968fe6ecdf769f1b78c663e4796f7e98160fffab2e9"
	I1004 03:22:30.925335   73775 logs.go:123] Gathering logs for kubelet ...
	I1004 03:22:30.925358   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 03:22:31.004968   73775 logs.go:123] Gathering logs for dmesg ...
	I1004 03:22:31.005007   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 03:22:31.018128   73775 logs.go:123] Gathering logs for describe nodes ...
	I1004 03:22:31.018155   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 03:22:31.448789   73775 logs.go:123] Gathering logs for container status ...
	I1004 03:22:31.448821   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 03:22:31.508633   73775 logs.go:123] Gathering logs for kube-apiserver [e3e1e7dd624b051c9cbb07c2dd117a8bcdd8bbc9cc7a322424c638fe0d09bc5a] ...
	I1004 03:22:31.508662   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e1e7dd624b051c9cbb07c2dd117a8bcdd8bbc9cc7a322424c638fe0d09bc5a"
	I1004 03:22:31.568699   73775 logs.go:123] Gathering logs for kube-controller-manager [708d9e24b6e5df5b8d1d57728b176c1e813b7ad84581b2c18477695c62c02352] ...
	I1004 03:22:31.568778   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 708d9e24b6e5df5b8d1d57728b176c1e813b7ad84581b2c18477695c62c02352"
	I1004 03:22:31.639133   73775 logs.go:123] Gathering logs for kube-proxy [6293fde9f4abd87b9929dd844b8ebbc4db80b7038b8081fface86eccebebdeb7] ...
	I1004 03:22:31.639182   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6293fde9f4abd87b9929dd844b8ebbc4db80b7038b8081fface86eccebebdeb7"
	I1004 03:22:31.694850   73775 logs.go:123] Gathering logs for kindnet [fddbd17e6f0b8b3878213f8b65ac74d1d6d73509b63a07659ff7df15e7edc380] ...
	I1004 03:22:31.694879   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fddbd17e6f0b8b3878213f8b65ac74d1d6d73509b63a07659ff7df15e7edc380"
	I1004 03:22:31.751806   73775 logs.go:123] Gathering logs for CRI-O ...
	I1004 03:22:31.751834   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
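The repeated "listing CRI containers" / "found id" / "Gathering logs for ..." blocks above all follow one pattern: ask crictl for the IDs of a named control-plane component's containers, then tail the last 400 log lines of each ID. Below is a minimal, hypothetical Go sketch of that pattern, shelling out to crictl the same way; it is illustrative only and not minikube's actual cri.go/logs.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mimics `sudo crictl ps -a --quiet --name=<name>`:
// it returns the IDs of all containers (running or exited) whose name matches.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; Fields drops the trailing blank line.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(component)
		if err != nil {
			fmt.Println("crictl failed:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), component, ids)
		for _, id := range ids {
			// Tail the last 400 lines of each container, as the log-gathering steps above do.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s (%d bytes of logs)\n", id, len(logs))
		}
	}
}

Run on a node with crictl installed, this prints the same kind of "N containers: [...]" lines that appear in the transcript.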
	I1004 03:22:34.331340   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:34.449873   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 03:22:34.449902   73775 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 03:22:34.449926   73775 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 03:22:34.449986   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 03:22:34.518790   73775 cri.go:89] found id: "e3e1e7dd624b051c9cbb07c2dd117a8bcdd8bbc9cc7a322424c638fe0d09bc5a"
	I1004 03:22:34.518809   73775 cri.go:89] found id: "12e394c4eeb98babb8af75685faed08bcbfa52b432f6f654726f31b9b4b29e9a"
	I1004 03:22:34.518814   73775 cri.go:89] found id: ""
	I1004 03:22:34.518821   73775 logs.go:282] 2 containers: [e3e1e7dd624b051c9cbb07c2dd117a8bcdd8bbc9cc7a322424c638fe0d09bc5a 12e394c4eeb98babb8af75685faed08bcbfa52b432f6f654726f31b9b4b29e9a]
	I1004 03:22:34.518877   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:34.522645   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:34.526227   73775 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 03:22:34.526293   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 03:22:34.609938   73775 cri.go:89] found id: "d9aee03c59e61913912252a19a1b08d202849714d014a4cea26ce3cfb0fd6743"
	I1004 03:22:34.609959   73775 cri.go:89] found id: "2c936133cb4df183d12ccca6443647ea7a06a9c9edd34de1ab261977f9503319"
	I1004 03:22:34.609964   73775 cri.go:89] found id: ""
	I1004 03:22:34.609971   73775 logs.go:282] 2 containers: [d9aee03c59e61913912252a19a1b08d202849714d014a4cea26ce3cfb0fd6743 2c936133cb4df183d12ccca6443647ea7a06a9c9edd34de1ab261977f9503319]
	I1004 03:22:34.610028   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:34.616741   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:34.623839   73775 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 03:22:34.623927   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 03:22:34.691222   73775 cri.go:89] found id: ""
	I1004 03:22:34.691244   73775 logs.go:282] 0 containers: []
	W1004 03:22:34.691252   73775 logs.go:284] No container was found matching "coredns"
	I1004 03:22:34.691276   73775 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 03:22:34.691337   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 03:22:34.773433   73775 cri.go:89] found id: "c2b4c970c8af8245984a2fc3d61f79b93f1ebd053526ef017539d094ec780a51"
	I1004 03:22:34.773453   73775 cri.go:89] found id: "8428cce5759dc59ce912385e346d8673cf6effe80d7bc0e9b859c4814324c3be"
	I1004 03:22:34.773458   73775 cri.go:89] found id: ""
	I1004 03:22:34.773464   73775 logs.go:282] 2 containers: [c2b4c970c8af8245984a2fc3d61f79b93f1ebd053526ef017539d094ec780a51 8428cce5759dc59ce912385e346d8673cf6effe80d7bc0e9b859c4814324c3be]
	I1004 03:22:34.773516   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:34.779955   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:34.783429   73775 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 03:22:34.783520   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 03:22:34.826583   73775 cri.go:89] found id: "6293fde9f4abd87b9929dd844b8ebbc4db80b7038b8081fface86eccebebdeb7"
	I1004 03:22:34.826610   73775 cri.go:89] found id: ""
	I1004 03:22:34.826619   73775 logs.go:282] 1 containers: [6293fde9f4abd87b9929dd844b8ebbc4db80b7038b8081fface86eccebebdeb7]
	I1004 03:22:34.826672   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:34.830322   73775 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 03:22:34.830397   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 03:22:34.867194   73775 cri.go:89] found id: "708d9e24b6e5df5b8d1d57728b176c1e813b7ad84581b2c18477695c62c02352"
	I1004 03:22:34.867213   73775 cri.go:89] found id: "b087094a54952b5f7bb5c968fe6ecdf769f1b78c663e4796f7e98160fffab2e9"
	I1004 03:22:34.867218   73775 cri.go:89] found id: ""
	I1004 03:22:34.867225   73775 logs.go:282] 2 containers: [708d9e24b6e5df5b8d1d57728b176c1e813b7ad84581b2c18477695c62c02352 b087094a54952b5f7bb5c968fe6ecdf769f1b78c663e4796f7e98160fffab2e9]
	I1004 03:22:34.867278   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:34.870792   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:34.874559   73775 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 03:22:34.874686   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 03:22:34.945656   73775 cri.go:89] found id: "fddbd17e6f0b8b3878213f8b65ac74d1d6d73509b63a07659ff7df15e7edc380"
	I1004 03:22:34.945728   73775 cri.go:89] found id: ""
	I1004 03:22:34.945749   73775 logs.go:282] 1 containers: [fddbd17e6f0b8b3878213f8b65ac74d1d6d73509b63a07659ff7df15e7edc380]
	I1004 03:22:34.945848   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:34.952234   73775 logs.go:123] Gathering logs for kube-apiserver [12e394c4eeb98babb8af75685faed08bcbfa52b432f6f654726f31b9b4b29e9a] ...
	I1004 03:22:34.952303   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e394c4eeb98babb8af75685faed08bcbfa52b432f6f654726f31b9b4b29e9a"
	I1004 03:22:35.015887   73775 logs.go:123] Gathering logs for etcd [d9aee03c59e61913912252a19a1b08d202849714d014a4cea26ce3cfb0fd6743] ...
	I1004 03:22:35.015965   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9aee03c59e61913912252a19a1b08d202849714d014a4cea26ce3cfb0fd6743"
	I1004 03:22:35.080703   73775 logs.go:123] Gathering logs for etcd [2c936133cb4df183d12ccca6443647ea7a06a9c9edd34de1ab261977f9503319] ...
	I1004 03:22:35.080749   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c936133cb4df183d12ccca6443647ea7a06a9c9edd34de1ab261977f9503319"
	I1004 03:22:35.136103   73775 logs.go:123] Gathering logs for kube-controller-manager [708d9e24b6e5df5b8d1d57728b176c1e813b7ad84581b2c18477695c62c02352] ...
	I1004 03:22:35.136137   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 708d9e24b6e5df5b8d1d57728b176c1e813b7ad84581b2c18477695c62c02352"
	I1004 03:22:35.193986   73775 logs.go:123] Gathering logs for describe nodes ...
	I1004 03:22:35.194025   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 03:22:35.571972   73775 logs.go:123] Gathering logs for kube-scheduler [c2b4c970c8af8245984a2fc3d61f79b93f1ebd053526ef017539d094ec780a51] ...
	I1004 03:22:35.572044   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2b4c970c8af8245984a2fc3d61f79b93f1ebd053526ef017539d094ec780a51"
	I1004 03:22:35.651992   73775 logs.go:123] Gathering logs for kube-scheduler [8428cce5759dc59ce912385e346d8673cf6effe80d7bc0e9b859c4814324c3be] ...
	I1004 03:22:35.652207   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8428cce5759dc59ce912385e346d8673cf6effe80d7bc0e9b859c4814324c3be"
	I1004 03:22:35.722739   73775 logs.go:123] Gathering logs for kube-proxy [6293fde9f4abd87b9929dd844b8ebbc4db80b7038b8081fface86eccebebdeb7] ...
	I1004 03:22:35.722818   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6293fde9f4abd87b9929dd844b8ebbc4db80b7038b8081fface86eccebebdeb7"
	I1004 03:22:35.772559   73775 logs.go:123] Gathering logs for kubelet ...
	I1004 03:22:35.772642   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 03:22:35.854615   73775 logs.go:123] Gathering logs for kube-controller-manager [b087094a54952b5f7bb5c968fe6ecdf769f1b78c663e4796f7e98160fffab2e9] ...
	I1004 03:22:35.854651   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b087094a54952b5f7bb5c968fe6ecdf769f1b78c663e4796f7e98160fffab2e9"
	I1004 03:22:35.902529   73775 logs.go:123] Gathering logs for kindnet [fddbd17e6f0b8b3878213f8b65ac74d1d6d73509b63a07659ff7df15e7edc380] ...
	I1004 03:22:35.902567   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fddbd17e6f0b8b3878213f8b65ac74d1d6d73509b63a07659ff7df15e7edc380"
	I1004 03:22:35.948412   73775 logs.go:123] Gathering logs for container status ...
	I1004 03:22:35.948443   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 03:22:35.995880   73775 logs.go:123] Gathering logs for dmesg ...
	I1004 03:22:35.995911   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 03:22:36.008571   73775 logs.go:123] Gathering logs for kube-apiserver [e3e1e7dd624b051c9cbb07c2dd117a8bcdd8bbc9cc7a322424c638fe0d09bc5a] ...
	I1004 03:22:36.008598   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e1e7dd624b051c9cbb07c2dd117a8bcdd8bbc9cc7a322424c638fe0d09bc5a"
	I1004 03:22:36.064030   73775 logs.go:123] Gathering logs for CRI-O ...
	I1004 03:22:36.064063   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 03:22:38.636070   73775 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 03:22:38.648222   73775 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1004 03:22:38.648298   73775 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I1004 03:22:38.648304   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:38.648314   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:38.648318   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:38.665718   73775 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1004 03:22:38.666044   73775 api_server.go:141] control plane version: v1.31.1
	I1004 03:22:38.666065   73775 api_server.go:131] duration metric: took 43.609260143s to wait for apiserver health ...
	I1004 03:22:38.666079   73775 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 03:22:38.666098   73775 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 03:22:38.666155   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 03:22:38.740998   73775 cri.go:89] found id: "e3e1e7dd624b051c9cbb07c2dd117a8bcdd8bbc9cc7a322424c638fe0d09bc5a"
	I1004 03:22:38.741027   73775 cri.go:89] found id: "12e394c4eeb98babb8af75685faed08bcbfa52b432f6f654726f31b9b4b29e9a"
	I1004 03:22:38.741033   73775 cri.go:89] found id: ""
	I1004 03:22:38.741043   73775 logs.go:282] 2 containers: [e3e1e7dd624b051c9cbb07c2dd117a8bcdd8bbc9cc7a322424c638fe0d09bc5a 12e394c4eeb98babb8af75685faed08bcbfa52b432f6f654726f31b9b4b29e9a]
	I1004 03:22:38.741101   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:38.746182   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:38.749970   73775 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 03:22:38.750038   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 03:22:38.808031   73775 cri.go:89] found id: "d9aee03c59e61913912252a19a1b08d202849714d014a4cea26ce3cfb0fd6743"
	I1004 03:22:38.808099   73775 cri.go:89] found id: "2c936133cb4df183d12ccca6443647ea7a06a9c9edd34de1ab261977f9503319"
	I1004 03:22:38.808119   73775 cri.go:89] found id: ""
	I1004 03:22:38.808146   73775 logs.go:282] 2 containers: [d9aee03c59e61913912252a19a1b08d202849714d014a4cea26ce3cfb0fd6743 2c936133cb4df183d12ccca6443647ea7a06a9c9edd34de1ab261977f9503319]
	I1004 03:22:38.808256   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:38.812514   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:38.816751   73775 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 03:22:38.816868   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 03:22:38.875587   73775 cri.go:89] found id: ""
	I1004 03:22:38.875665   73775 logs.go:282] 0 containers: []
	W1004 03:22:38.875687   73775 logs.go:284] No container was found matching "coredns"
	I1004 03:22:38.875710   73775 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 03:22:38.875797   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 03:22:38.953959   73775 cri.go:89] found id: "c2b4c970c8af8245984a2fc3d61f79b93f1ebd053526ef017539d094ec780a51"
	I1004 03:22:38.954035   73775 cri.go:89] found id: "8428cce5759dc59ce912385e346d8673cf6effe80d7bc0e9b859c4814324c3be"
	I1004 03:22:38.954057   73775 cri.go:89] found id: ""
	I1004 03:22:38.954081   73775 logs.go:282] 2 containers: [c2b4c970c8af8245984a2fc3d61f79b93f1ebd053526ef017539d094ec780a51 8428cce5759dc59ce912385e346d8673cf6effe80d7bc0e9b859c4814324c3be]
	I1004 03:22:38.954193   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:38.958635   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:38.962584   73775 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 03:22:38.962711   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 03:22:39.016232   73775 cri.go:89] found id: "6293fde9f4abd87b9929dd844b8ebbc4db80b7038b8081fface86eccebebdeb7"
	I1004 03:22:39.016314   73775 cri.go:89] found id: ""
	I1004 03:22:39.016348   73775 logs.go:282] 1 containers: [6293fde9f4abd87b9929dd844b8ebbc4db80b7038b8081fface86eccebebdeb7]
	I1004 03:22:39.016458   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:39.021666   73775 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 03:22:39.021811   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 03:22:39.077717   73775 cri.go:89] found id: "708d9e24b6e5df5b8d1d57728b176c1e813b7ad84581b2c18477695c62c02352"
	I1004 03:22:39.077749   73775 cri.go:89] found id: "b087094a54952b5f7bb5c968fe6ecdf769f1b78c663e4796f7e98160fffab2e9"
	I1004 03:22:39.077755   73775 cri.go:89] found id: ""
	I1004 03:22:39.077761   73775 logs.go:282] 2 containers: [708d9e24b6e5df5b8d1d57728b176c1e813b7ad84581b2c18477695c62c02352 b087094a54952b5f7bb5c968fe6ecdf769f1b78c663e4796f7e98160fffab2e9]
	I1004 03:22:39.077826   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:39.083709   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:39.087880   73775 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 03:22:39.087954   73775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 03:22:39.140245   73775 cri.go:89] found id: "fddbd17e6f0b8b3878213f8b65ac74d1d6d73509b63a07659ff7df15e7edc380"
	I1004 03:22:39.140270   73775 cri.go:89] found id: ""
	I1004 03:22:39.140278   73775 logs.go:282] 1 containers: [fddbd17e6f0b8b3878213f8b65ac74d1d6d73509b63a07659ff7df15e7edc380]
	I1004 03:22:39.140340   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:39.145590   73775 logs.go:123] Gathering logs for dmesg ...
	I1004 03:22:39.145619   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 03:22:39.160546   73775 logs.go:123] Gathering logs for etcd [2c936133cb4df183d12ccca6443647ea7a06a9c9edd34de1ab261977f9503319] ...
	I1004 03:22:39.160577   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c936133cb4df183d12ccca6443647ea7a06a9c9edd34de1ab261977f9503319"
	I1004 03:22:39.215878   73775 logs.go:123] Gathering logs for kube-scheduler [c2b4c970c8af8245984a2fc3d61f79b93f1ebd053526ef017539d094ec780a51] ...
	I1004 03:22:39.215914   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2b4c970c8af8245984a2fc3d61f79b93f1ebd053526ef017539d094ec780a51"
	I1004 03:22:39.269995   73775 logs.go:123] Gathering logs for kubelet ...
	I1004 03:22:39.270034   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 03:22:39.358464   73775 logs.go:123] Gathering logs for describe nodes ...
	I1004 03:22:39.358503   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 03:22:39.667158   73775 logs.go:123] Gathering logs for etcd [d9aee03c59e61913912252a19a1b08d202849714d014a4cea26ce3cfb0fd6743] ...
	I1004 03:22:39.667237   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9aee03c59e61913912252a19a1b08d202849714d014a4cea26ce3cfb0fd6743"
	I1004 03:22:39.735484   73775 logs.go:123] Gathering logs for kube-proxy [6293fde9f4abd87b9929dd844b8ebbc4db80b7038b8081fface86eccebebdeb7] ...
	I1004 03:22:39.735521   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6293fde9f4abd87b9929dd844b8ebbc4db80b7038b8081fface86eccebebdeb7"
	I1004 03:22:39.780991   73775 logs.go:123] Gathering logs for CRI-O ...
	I1004 03:22:39.781022   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 03:22:39.853609   73775 logs.go:123] Gathering logs for container status ...
	I1004 03:22:39.853647   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 03:22:39.916673   73775 logs.go:123] Gathering logs for kube-apiserver [12e394c4eeb98babb8af75685faed08bcbfa52b432f6f654726f31b9b4b29e9a] ...
	I1004 03:22:39.916702   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e394c4eeb98babb8af75685faed08bcbfa52b432f6f654726f31b9b4b29e9a"
	I1004 03:22:39.955840   73775 logs.go:123] Gathering logs for kube-scheduler [8428cce5759dc59ce912385e346d8673cf6effe80d7bc0e9b859c4814324c3be] ...
	I1004 03:22:39.955871   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8428cce5759dc59ce912385e346d8673cf6effe80d7bc0e9b859c4814324c3be"
	I1004 03:22:40.000086   73775 logs.go:123] Gathering logs for kube-controller-manager [708d9e24b6e5df5b8d1d57728b176c1e813b7ad84581b2c18477695c62c02352] ...
	I1004 03:22:40.000117   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 708d9e24b6e5df5b8d1d57728b176c1e813b7ad84581b2c18477695c62c02352"
	I1004 03:22:40.072546   73775 logs.go:123] Gathering logs for kube-controller-manager [b087094a54952b5f7bb5c968fe6ecdf769f1b78c663e4796f7e98160fffab2e9] ...
	I1004 03:22:40.072584   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b087094a54952b5f7bb5c968fe6ecdf769f1b78c663e4796f7e98160fffab2e9"
	I1004 03:22:40.129186   73775 logs.go:123] Gathering logs for kindnet [fddbd17e6f0b8b3878213f8b65ac74d1d6d73509b63a07659ff7df15e7edc380] ...
	I1004 03:22:40.129240   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fddbd17e6f0b8b3878213f8b65ac74d1d6d73509b63a07659ff7df15e7edc380"
	I1004 03:22:40.228504   73775 logs.go:123] Gathering logs for kube-apiserver [e3e1e7dd624b051c9cbb07c2dd117a8bcdd8bbc9cc7a322424c638fe0d09bc5a] ...
	I1004 03:22:40.228535   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e1e7dd624b051c9cbb07c2dd117a8bcdd8bbc9cc7a322424c638fe0d09bc5a"
	I1004 03:22:42.827256   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1004 03:22:42.827281   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:42.827291   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:42.827296   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:42.835129   73775 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:22:42.844673   73775 system_pods.go:59] 26 kube-system pods found
	I1004 03:22:42.844710   73775 system_pods.go:61] "coredns-7c65d6cfc9-bmz2w" [0f4b2b8f-84ff-45e0-91de-16c7bfb3baed] Running
	I1004 03:22:42.844721   73775 system_pods.go:61] "coredns-7c65d6cfc9-md2qq" [9751f271-5ea1-4fab-960e-ce08f8d5ac2f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 03:22:42.844727   73775 system_pods.go:61] "etcd-ha-481241" [0a2cc584-6b5f-4130-bf95-c133a793da53] Running
	I1004 03:22:42.844733   73775 system_pods.go:61] "etcd-ha-481241-m02" [a6401375-f649-4d55-a7f5-452b2de336cf] Running
	I1004 03:22:42.844738   73775 system_pods.go:61] "etcd-ha-481241-m03" [8d87c444-ad0b-409a-bf72-c366111d4a92] Running
	I1004 03:22:42.844742   73775 system_pods.go:61] "kindnet-2rz67" [adf987ba-28ce-4da3-8d4b-9846713a3008] Running
	I1004 03:22:42.844746   73775 system_pods.go:61] "kindnet-gczfk" [a7e6bbd2-cf8f-45cf-b231-43dadb991b78] Running
	I1004 03:22:42.844750   73775 system_pods.go:61] "kindnet-lbg2z" [16071e75-c8e9-4508-b960-912a959462c5] Running
	I1004 03:22:42.844757   73775 system_pods.go:61] "kindnet-nvptn" [a174f174-c4f8-4c27-81d0-480d0a7f6b8a] Running
	I1004 03:22:42.844763   73775 system_pods.go:61] "kube-apiserver-ha-481241" [24a28fc4-7302-4215-85c4-5bffbbfe726b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 03:22:42.844776   73775 system_pods.go:61] "kube-apiserver-ha-481241-m02" [68b737d9-9572-4f7b-8e4e-bb6ce24d4353] Running
	I1004 03:22:42.844781   73775 system_pods.go:61] "kube-apiserver-ha-481241-m03" [581af2ef-c866-44a9-ae86-fd3cc19c77f4] Running
	I1004 03:22:42.844788   73775 system_pods.go:61] "kube-controller-manager-ha-481241" [bc1b0969-7aae-4587-86de-60c57d415df3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 03:22:42.844796   73775 system_pods.go:61] "kube-controller-manager-ha-481241-m02" [857ae9bd-728d-4136-9b43-fc062278ac91] Running
	I1004 03:22:42.844801   73775 system_pods.go:61] "kube-controller-manager-ha-481241-m03" [26744163-4473-4c4e-b869-ba59e820b55e] Running
	I1004 03:22:42.844804   73775 system_pods.go:61] "kube-proxy-25pr9" [44a32359-578f-4457-a9ba-e84034957cc6] Running
	I1004 03:22:42.844809   73775 system_pods.go:61] "kube-proxy-7hdvx" [8b93ed7b-58e8-4e11-a4c5-b43077465fa8] Running
	I1004 03:22:42.844815   73775 system_pods.go:61] "kube-proxy-9dn8z" [f453d599-b235-4a2d-944c-a2a9f5de08d2] Running
	I1004 03:22:42.844819   73775 system_pods.go:61] "kube-proxy-q9kvx" [895a6057-11e9-4173-a608-6dff90b695ca] Running
	I1004 03:22:42.844825   73775 system_pods.go:61] "kube-scheduler-ha-481241" [8f929949-d2dc-4edd-9f78-2342dd9b3559] Running
	I1004 03:22:42.844830   73775 system_pods.go:61] "kube-scheduler-ha-481241-m02" [693f1028-34a5-4f1d-a9da-95bd39bfe1f1] Running
	I1004 03:22:42.844833   73775 system_pods.go:61] "kube-scheduler-ha-481241-m03" [416b4129-e57f-4f87-b018-c328938c7ea2] Running
	I1004 03:22:42.844837   73775 system_pods.go:61] "kube-vip-ha-481241" [e0c2b691-a598-4505-98db-e64af19a342a] Running
	I1004 03:22:42.844840   73775 system_pods.go:61] "kube-vip-ha-481241-m02" [5c8006a6-96c6-41f2-ba72-0fd4eec4eb64] Running
	I1004 03:22:42.844845   73775 system_pods.go:61] "kube-vip-ha-481241-m03" [8debcda7-2b18-4acc-9470-441f27c103a3] Running
	I1004 03:22:42.844854   73775 system_pods.go:61] "storage-provisioner" [c4529fe7-bbb7-4549-901b-a25810afc1b5] Running
	I1004 03:22:42.844860   73775 system_pods.go:74] duration metric: took 4.178775341s to wait for pod list to return data ...
	I1004 03:22:42.844868   73775 default_sa.go:34] waiting for default service account to be created ...
	I1004 03:22:42.844956   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:22:42.844967   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:42.844975   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:42.844979   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:42.848235   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:22:42.848473   73775 default_sa.go:45] found service account: "default"
	I1004 03:22:42.848493   73775 default_sa.go:55] duration metric: took 3.616784ms for default service account to be created ...
	I1004 03:22:42.848502   73775 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 03:22:42.848563   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1004 03:22:42.848571   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:42.848579   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:42.848589   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:42.853755   73775 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 03:22:42.864356   73775 system_pods.go:86] 26 kube-system pods found
	I1004 03:22:42.864396   73775 system_pods.go:89] "coredns-7c65d6cfc9-bmz2w" [0f4b2b8f-84ff-45e0-91de-16c7bfb3baed] Running
	I1004 03:22:42.864407   73775 system_pods.go:89] "coredns-7c65d6cfc9-md2qq" [9751f271-5ea1-4fab-960e-ce08f8d5ac2f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 03:22:42.864414   73775 system_pods.go:89] "etcd-ha-481241" [0a2cc584-6b5f-4130-bf95-c133a793da53] Running
	I1004 03:22:42.864421   73775 system_pods.go:89] "etcd-ha-481241-m02" [a6401375-f649-4d55-a7f5-452b2de336cf] Running
	I1004 03:22:42.864425   73775 system_pods.go:89] "etcd-ha-481241-m03" [8d87c444-ad0b-409a-bf72-c366111d4a92] Running
	I1004 03:22:42.864432   73775 system_pods.go:89] "kindnet-2rz67" [adf987ba-28ce-4da3-8d4b-9846713a3008] Running
	I1004 03:22:42.864444   73775 system_pods.go:89] "kindnet-gczfk" [a7e6bbd2-cf8f-45cf-b231-43dadb991b78] Running
	I1004 03:22:42.864449   73775 system_pods.go:89] "kindnet-lbg2z" [16071e75-c8e9-4508-b960-912a959462c5] Running
	I1004 03:22:42.864455   73775 system_pods.go:89] "kindnet-nvptn" [a174f174-c4f8-4c27-81d0-480d0a7f6b8a] Running
	I1004 03:22:42.864462   73775 system_pods.go:89] "kube-apiserver-ha-481241" [24a28fc4-7302-4215-85c4-5bffbbfe726b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 03:22:42.864472   73775 system_pods.go:89] "kube-apiserver-ha-481241-m02" [68b737d9-9572-4f7b-8e4e-bb6ce24d4353] Running
	I1004 03:22:42.864477   73775 system_pods.go:89] "kube-apiserver-ha-481241-m03" [581af2ef-c866-44a9-ae86-fd3cc19c77f4] Running
	I1004 03:22:42.864484   73775 system_pods.go:89] "kube-controller-manager-ha-481241" [bc1b0969-7aae-4587-86de-60c57d415df3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 03:22:42.864493   73775 system_pods.go:89] "kube-controller-manager-ha-481241-m02" [857ae9bd-728d-4136-9b43-fc062278ac91] Running
	I1004 03:22:42.864498   73775 system_pods.go:89] "kube-controller-manager-ha-481241-m03" [26744163-4473-4c4e-b869-ba59e820b55e] Running
	I1004 03:22:42.864502   73775 system_pods.go:89] "kube-proxy-25pr9" [44a32359-578f-4457-a9ba-e84034957cc6] Running
	I1004 03:22:42.864510   73775 system_pods.go:89] "kube-proxy-7hdvx" [8b93ed7b-58e8-4e11-a4c5-b43077465fa8] Running
	I1004 03:22:42.864514   73775 system_pods.go:89] "kube-proxy-9dn8z" [f453d599-b235-4a2d-944c-a2a9f5de08d2] Running
	I1004 03:22:42.864518   73775 system_pods.go:89] "kube-proxy-q9kvx" [895a6057-11e9-4173-a608-6dff90b695ca] Running
	I1004 03:22:42.864526   73775 system_pods.go:89] "kube-scheduler-ha-481241" [8f929949-d2dc-4edd-9f78-2342dd9b3559] Running
	I1004 03:22:42.864530   73775 system_pods.go:89] "kube-scheduler-ha-481241-m02" [693f1028-34a5-4f1d-a9da-95bd39bfe1f1] Running
	I1004 03:22:42.864540   73775 system_pods.go:89] "kube-scheduler-ha-481241-m03" [416b4129-e57f-4f87-b018-c328938c7ea2] Running
	I1004 03:22:42.864545   73775 system_pods.go:89] "kube-vip-ha-481241" [e0c2b691-a598-4505-98db-e64af19a342a] Running
	I1004 03:22:42.864550   73775 system_pods.go:89] "kube-vip-ha-481241-m02" [5c8006a6-96c6-41f2-ba72-0fd4eec4eb64] Running
	I1004 03:22:42.864557   73775 system_pods.go:89] "kube-vip-ha-481241-m03" [8debcda7-2b18-4acc-9470-441f27c103a3] Running
	I1004 03:22:42.864561   73775 system_pods.go:89] "storage-provisioner" [c4529fe7-bbb7-4549-901b-a25810afc1b5] Running
	I1004 03:22:42.864568   73775 system_pods.go:126] duration metric: took 16.060566ms to wait for k8s-apps to be running ...
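The "26 kube-system pods found" listings above are produced by querying /api/v1/namespaces/kube-system/pods and checking each pod's phase and Ready condition. Below is a compact client-go sketch of the same check; the kubeconfig path is taken from the commands in the log, while the function layout and output format are illustrative rather than minikube's system_pods.go.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the node's kubeconfig (path as used in the log).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		// A pod counts as healthy only when it is Running and Ready, which is why
		// the transcript annotates some pods with "Ready:ContainersNotReady".
		fmt.Printf("%-45s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}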
	I1004 03:22:42.864579   73775 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 03:22:42.864642   73775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:22:42.878044   73775 system_svc.go:56] duration metric: took 13.454705ms WaitForService to wait for kubelet
	I1004 03:22:42.878091   73775 kubeadm.go:582] duration metric: took 1m13.080266332s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:22:42.878112   73775 node_conditions.go:102] verifying NodePressure condition ...
	I1004 03:22:42.878196   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1004 03:22:42.878206   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:42.878215   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:42.878219   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:42.882365   73775 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:22:42.883651   73775 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1004 03:22:42.883692   73775 node_conditions.go:123] node cpu capacity is 2
	I1004 03:22:42.883705   73775 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1004 03:22:42.883732   73775 node_conditions.go:123] node cpu capacity is 2
	I1004 03:22:42.883744   73775 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1004 03:22:42.883750   73775 node_conditions.go:123] node cpu capacity is 2
	I1004 03:22:42.883756   73775 node_conditions.go:105] duration metric: took 5.636226ms to run NodePressure ...
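The paired "node storage ephemeral capacity ... / node cpu capacity is 2" lines above come from walking the node list and reading each node's capacity while verifying that no node reports pressure. A short illustrative client-go sketch of reading those fields follows (same assumed kubeconfig as before; not the actual node_conditions.go).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity carries the raw per-node totals that the log prints.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu capacity %s, ephemeral-storage capacity %s\n",
			n.Name, cpu.String(), storage.String())

		// MemoryPressure / DiskPressure conditions are what "verifying NodePressure" inspects.
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}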
	I1004 03:22:42.883772   73775 start.go:241] waiting for startup goroutines ...
	I1004 03:22:42.883806   73775 start.go:255] writing updated cluster config ...
	I1004 03:22:42.886913   73775 out.go:201] 
	I1004 03:22:42.889716   73775 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:22:42.889836   73775 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/config.json ...
	I1004 03:22:42.892950   73775 out.go:177] * Starting "ha-481241-m04" worker node in "ha-481241" cluster
	I1004 03:22:42.896293   73775 cache.go:121] Beginning downloading kic base image for docker with crio
	I1004 03:22:42.899007   73775 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1004 03:22:42.901588   73775 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:22:42.901617   73775 cache.go:56] Caching tarball of preloaded images
	I1004 03:22:42.901680   73775 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 03:22:42.901771   73775 preload.go:172] Found /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1004 03:22:42.901783   73775 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:22:42.901926   73775 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/config.json ...
	I1004 03:22:42.919914   73775 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1004 03:22:42.919936   73775 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1004 03:22:42.919950   73775 cache.go:194] Successfully downloaded all kic artifacts
	I1004 03:22:42.919975   73775 start.go:360] acquireMachinesLock for ha-481241-m04: {Name:mkee5e8ac501923c80466eb5e8af9ed34ce281f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:22:42.920031   73775 start.go:364] duration metric: took 36.307µs to acquireMachinesLock for "ha-481241-m04"
	I1004 03:22:42.920055   73775 start.go:96] Skipping create...Using existing machine configuration
	I1004 03:22:42.920066   73775 fix.go:54] fixHost starting: m04
	I1004 03:22:42.920325   73775 cli_runner.go:164] Run: docker container inspect ha-481241-m04 --format={{.State.Status}}
	I1004 03:22:42.939657   73775 fix.go:112] recreateIfNeeded on ha-481241-m04: state=Stopped err=<nil>
	W1004 03:22:42.939689   73775 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 03:22:42.942515   73775 out.go:177] * Restarting existing docker container for "ha-481241-m04" ...
	I1004 03:22:42.944901   73775 cli_runner.go:164] Run: docker start ha-481241-m04
	I1004 03:22:43.277647   73775 cli_runner.go:164] Run: docker container inspect ha-481241-m04 --format={{.State.Status}}
	I1004 03:22:43.299407   73775 kic.go:430] container "ha-481241-m04" state is running.
	I1004 03:22:43.299767   73775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481241-m04
	I1004 03:22:43.324311   73775 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/config.json ...
	I1004 03:22:43.324578   73775 machine.go:93] provisionDockerMachine start ...
	I1004 03:22:43.324647   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m04
	I1004 03:22:43.360252   73775 main.go:141] libmachine: Using SSH client type: native
	I1004 03:22:43.360491   73775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1004 03:22:43.360505   73775 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 03:22:43.361086   73775 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1004 03:22:46.500636   73775 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481241-m04
	
	I1004 03:22:46.500661   73775 ubuntu.go:169] provisioning hostname "ha-481241-m04"
	I1004 03:22:46.500724   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m04
	I1004 03:22:46.519784   73775 main.go:141] libmachine: Using SSH client type: native
	I1004 03:22:46.520040   73775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1004 03:22:46.520060   73775 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481241-m04 && echo "ha-481241-m04" | sudo tee /etc/hostname
	I1004 03:22:46.666199   73775 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481241-m04
	
	I1004 03:22:46.666356   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m04
	I1004 03:22:46.708453   73775 main.go:141] libmachine: Using SSH client type: native
	I1004 03:22:46.708705   73775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1004 03:22:46.708724   73775 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481241-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481241-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481241-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:22:46.845133   73775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
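Each provisioning step above ("Run: ...") is executed over SSH to 127.0.0.1:32838, the host port Docker publishes for the container's 22/tcp, using the machine's id_rsa key and the "docker" user. The following is a bare-bones sketch of running one such command with golang.org/x/crypto/ssh; the address, user, and key path are taken from the log, and the rest is illustrative rather than libmachine's implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH opens one SSH session to the published container port and runs cmd,
// roughly the way each provisioning command above is executed on the node.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	conn, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node, host key not pinned
	})
	if err != nil {
		return "", err
	}
	defer conn.Close()

	session, err := conn.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:32838", "docker",
		"/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m04/id_rsa",
		"hostname")
	fmt.Printf("output: %q err: %v\n", out, err)
}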
	I1004 03:22:46.845162   73775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19546-2238/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-2238/.minikube}
	I1004 03:22:46.845191   73775 ubuntu.go:177] setting up certificates
	I1004 03:22:46.845228   73775 provision.go:84] configureAuth start
	I1004 03:22:46.845290   73775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481241-m04
	I1004 03:22:46.861884   73775 provision.go:143] copyHostCerts
	I1004 03:22:46.861928   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem
	I1004 03:22:46.861961   73775 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem, removing ...
	I1004 03:22:46.861973   73775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem
	I1004 03:22:46.862052   73775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem (1679 bytes)
	I1004 03:22:46.862131   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem
	I1004 03:22:46.862151   73775 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem, removing ...
	I1004 03:22:46.862156   73775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem
	I1004 03:22:46.862184   73775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem (1082 bytes)
	I1004 03:22:46.862226   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem
	I1004 03:22:46.862246   73775 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem, removing ...
	I1004 03:22:46.862254   73775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem
	I1004 03:22:46.862281   73775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem (1123 bytes)
	I1004 03:22:46.862331   73775 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem org=jenkins.ha-481241-m04 san=[127.0.0.1 192.168.49.5 ha-481241-m04 localhost minikube]
	I1004 03:22:47.247108   73775 provision.go:177] copyRemoteCerts
	I1004 03:22:47.247181   73775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:22:47.247227   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m04
	I1004 03:22:47.265169   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m04/id_rsa Username:docker}
	I1004 03:22:47.362554   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:22:47.362618   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:22:47.389013   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:22:47.389123   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 03:22:47.418410   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:22:47.418474   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 03:22:47.443660   73775 provision.go:87] duration metric: took 598.417661ms to configureAuth
	I1004 03:22:47.443686   73775 ubuntu.go:193] setting minikube options for container-runtime
	I1004 03:22:47.443935   73775 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:22:47.444045   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m04
	I1004 03:22:47.462777   73775 main.go:141] libmachine: Using SSH client type: native
	I1004 03:22:47.463021   73775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1004 03:22:47.463041   73775 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:22:47.739062   73775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:22:47.739082   73775 machine.go:96] duration metric: took 4.414485717s to provisionDockerMachine
	I1004 03:22:47.739093   73775 start.go:293] postStartSetup for "ha-481241-m04" (driver="docker")
	I1004 03:22:47.739104   73775 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:22:47.739168   73775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:22:47.739210   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m04
	I1004 03:22:47.762158   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m04/id_rsa Username:docker}
	I1004 03:22:47.863356   73775 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:22:47.867241   73775 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1004 03:22:47.867280   73775 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1004 03:22:47.867292   73775 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1004 03:22:47.867299   73775 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1004 03:22:47.867311   73775 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/addons for local assets ...
	I1004 03:22:47.867385   73775 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/files for local assets ...
	I1004 03:22:47.867467   73775 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem -> 75602.pem in /etc/ssl/certs
	I1004 03:22:47.867480   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem -> /etc/ssl/certs/75602.pem
	I1004 03:22:47.867586   73775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:22:47.876902   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem --> /etc/ssl/certs/75602.pem (1708 bytes)
	I1004 03:22:47.905580   73775 start.go:296] duration metric: took 166.472301ms for postStartSetup
	I1004 03:22:47.905662   73775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:22:47.905713   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m04
	I1004 03:22:47.922534   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m04/id_rsa Username:docker}
	I1004 03:22:48.014572   73775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1004 03:22:48.019452   73775 fix.go:56] duration metric: took 5.099380106s for fixHost
	I1004 03:22:48.019479   73775 start.go:83] releasing machines lock for "ha-481241-m04", held for 5.099434439s
	I1004 03:22:48.019549   73775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481241-m04
	I1004 03:22:48.042066   73775 out.go:177] * Found network options:
	I1004 03:22:48.044929   73775 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1004 03:22:48.047667   73775 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 03:22:48.047697   73775 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 03:22:48.047721   73775 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 03:22:48.047731   73775 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:22:48.047803   73775 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:22:48.047845   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m04
	I1004 03:22:48.048126   73775 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:22:48.048179   73775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m04
	I1004 03:22:48.079036   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m04/id_rsa Username:docker}
	I1004 03:22:48.096798   73775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m04/id_rsa Username:docker}
	I1004 03:22:48.361311   73775 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 03:22:48.365995   73775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:22:48.374822   73775 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1004 03:22:48.374899   73775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:22:48.384061   73775 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 03:22:48.384083   73775 start.go:495] detecting cgroup driver to use...
	I1004 03:22:48.384115   73775 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1004 03:22:48.384176   73775 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:22:48.399966   73775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:22:48.416246   73775 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:22:48.416310   73775 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:22:48.435541   73775 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:22:48.449579   73775 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:22:48.550260   73775 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:22:48.657053   73775 docker.go:233] disabling docker service ...
	I1004 03:22:48.657122   73775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:22:48.670548   73775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:22:48.686103   73775 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:22:48.781810   73775 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:22:48.877926   73775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:22:48.891903   73775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:22:48.909296   73775 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:22:48.909397   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:22:48.922541   73775 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:22:48.922620   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:22:48.935005   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:22:48.946714   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:22:48.957965   73775 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:22:48.968039   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:22:48.980194   73775 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:22:48.991463   73775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:22:49.002381   73775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:22:49.011447   73775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:22:49.021699   73775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:22:49.123659   73775 ssh_runner.go:195] Run: sudo systemctl restart crio
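Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment (reconstructed from the commands in this log rather than read back from the node):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart above are what make the runtime pick these values up; the 60s waits that follow just confirm the socket and crictl come back.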
	I1004 03:22:49.253877   73775 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:22:49.253950   73775 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:22:49.257880   73775 start.go:563] Will wait 60s for crictl version
	I1004 03:22:49.257945   73775 ssh_runner.go:195] Run: which crictl
	I1004 03:22:49.261311   73775 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:22:49.302608   73775 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1004 03:22:49.302761   73775 ssh_runner.go:195] Run: crio --version
	I1004 03:22:49.345993   73775 ssh_runner.go:195] Run: crio --version
	I1004 03:22:49.394494   73775 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1004 03:22:49.397092   73775 out.go:177]   - env NO_PROXY=192.168.49.2
	I1004 03:22:49.399713   73775 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1004 03:22:49.402382   73775 cli_runner.go:164] Run: docker network inspect ha-481241 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 03:22:49.417857   73775 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1004 03:22:49.421507   73775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:22:49.432882   73775 mustload.go:65] Loading cluster: ha-481241
	I1004 03:22:49.433124   73775 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:22:49.433438   73775 cli_runner.go:164] Run: docker container inspect ha-481241 --format={{.State.Status}}
	I1004 03:22:49.459816   73775 host.go:66] Checking if "ha-481241" exists ...
	I1004 03:22:49.460187   73775 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241 for IP: 192.168.49.5
	I1004 03:22:49.460198   73775 certs.go:194] generating shared ca certs ...
	I1004 03:22:49.460215   73775 certs.go:226] acquiring lock for ca certs: {Name:mk468b07ab6620fd74cefc3667e1a8643008ce5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:22:49.460413   73775 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key
	I1004 03:22:49.460461   73775 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key
	I1004 03:22:49.460472   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:22:49.460484   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:22:49.460496   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:22:49.460508   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:22:49.460556   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem (1338 bytes)
	W1004 03:22:49.460598   73775 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560_empty.pem, impossibly tiny 0 bytes
	I1004 03:22:49.460608   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:22:49.460635   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:22:49.460657   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:22:49.460678   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem (1679 bytes)
	I1004 03:22:49.460723   73775 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem (1708 bytes)
	I1004 03:22:49.460752   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem -> /usr/share/ca-certificates/75602.pem
	I1004 03:22:49.460764   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:22:49.460775   73775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem -> /usr/share/ca-certificates/7560.pem
	I1004 03:22:49.460795   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:22:49.489805   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 03:22:49.516532   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:22:49.543511   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 03:22:49.570361   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem --> /usr/share/ca-certificates/75602.pem (1708 bytes)
	I1004 03:22:49.597055   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:22:49.622655   73775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem --> /usr/share/ca-certificates/7560.pem (1338 bytes)
	I1004 03:22:49.647491   73775 ssh_runner.go:195] Run: openssl version
	I1004 03:22:49.654085   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7560.pem && ln -fs /usr/share/ca-certificates/7560.pem /etc/ssl/certs/7560.pem"
	I1004 03:22:49.665110   73775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7560.pem
	I1004 03:22:49.668673   73775 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/7560.pem
	I1004 03:22:49.668774   73775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7560.pem
	I1004 03:22:49.677478   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7560.pem /etc/ssl/certs/51391683.0"
	I1004 03:22:49.686469   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75602.pem && ln -fs /usr/share/ca-certificates/75602.pem /etc/ssl/certs/75602.pem"
	I1004 03:22:49.696169   73775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75602.pem
	I1004 03:22:49.699816   73775 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/75602.pem
	I1004 03:22:49.699894   73775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75602.pem
	I1004 03:22:49.707289   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75602.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:22:49.716937   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:22:49.726859   73775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:22:49.730570   73775 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:22:49.730642   73775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:22:49.737744   73775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
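The three test/ln/openssl sequences above follow OpenSSL's hashed-directory convention: each certificate is linked into /etc/ssl/certs under its subject-hash name (51391683.0, 3ec20f2e.0 and b5213941.0 here). A minimal sketch of one iteration, using the minikubeCA path from this log:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # b5213941 for this CA
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"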
	I1004 03:22:49.746963   73775 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:22:49.750295   73775 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:22:49.750337   73775 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.1  false true} ...
	I1004 03:22:49.750421   73775 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481241-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-481241 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:22:49.750492   73775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:22:49.761878   73775 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:22:49.761960   73775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1004 03:22:49.770797   73775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1004 03:22:49.790176   73775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:22:49.809801   73775 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:22:49.813287   73775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
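Both hosts-file rewrites (192.168.49.1 earlier and 192.168.49.254 here) use the same strip-then-append pattern, so after this step the node's /etc/hosts should contain entries like:

	192.168.49.1	host.minikube.internal
	192.168.49.254	control-plane.minikube.internal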
	I1004 03:22:49.824517   73775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:22:49.916062   73775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:22:49.930602   73775 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1004 03:22:49.931030   73775 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:22:49.933354   73775 out.go:177] * Verifying Kubernetes components...
	I1004 03:22:49.935466   73775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:22:50.033849   73775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:22:50.050202   73775 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:22:50.050478   73775 kapi.go:59] client config for ha-481241: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-2238/.minikube/profiles/ha-481241/client.key", CAFile:"/home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a17550), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1004 03:22:50.050539   73775 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1004 03:22:50.050756   73775 node_ready.go:35] waiting up to 6m0s for node "ha-481241-m04" to be "Ready" ...
	I1004 03:22:50.050829   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m04
	I1004 03:22:50.050841   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:50.050849   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:50.050855   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:50.053936   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:22:50.055186   73775 node_ready.go:49] node "ha-481241-m04" has status "Ready":"True"
	I1004 03:22:50.055212   73775 node_ready.go:38] duration metric: took 4.436325ms for node "ha-481241-m04" to be "Ready" ...
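The "Ready" check here reads the node's Ready condition through the API server; an equivalent manual check (a sketch, assuming the kubeconfig context matches the profile name as elsewhere in this report) would be:

	kubectl --context ha-481241 get node ha-481241-m04 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'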
	I1004 03:22:50.055223   73775 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:22:50.055298   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1004 03:22:50.055311   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:50.055320   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:50.055325   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:50.061621   73775 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:22:50.072510   73775 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bmz2w" in "kube-system" namespace to be "Ready" ...
	I1004 03:22:50.073112   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bmz2w
	I1004 03:22:50.073149   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:50.073178   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:50.073202   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:50.076354   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:22:50.077289   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:50.077311   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:50.077320   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:50.077326   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:50.080176   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:50.080843   73775 pod_ready.go:93] pod "coredns-7c65d6cfc9-bmz2w" in "kube-system" namespace has status "Ready":"True"
	I1004 03:22:50.080866   73775 pod_ready.go:82] duration metric: took 8.018189ms for pod "coredns-7c65d6cfc9-bmz2w" in "kube-system" namespace to be "Ready" ...
	I1004 03:22:50.080879   73775 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace to be "Ready" ...
	I1004 03:22:50.080983   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:50.080993   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:50.081002   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:50.081006   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:50.084052   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:22:50.085111   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:50.085132   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:50.085165   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:50.085178   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:50.087950   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:50.582013   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:50.582037   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:50.582047   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:50.582052   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:50.584963   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:50.585738   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:50.585756   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:50.585765   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:50.585770   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:50.588389   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:51.082081   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:51.082104   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:51.082115   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:51.082119   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:51.087675   73775 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 03:22:51.088538   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:51.088559   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:51.088569   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:51.088575   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:51.091602   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:22:51.581962   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:51.581984   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:51.581994   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:51.582000   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:51.585004   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:51.586069   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:51.586087   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:51.586095   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:51.586099   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:51.588762   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:52.081509   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:52.081534   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:52.081544   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:52.081553   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:52.084419   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:52.086309   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:52.086332   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:52.086342   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:52.086348   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:52.089316   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:52.090040   73775 pod_ready.go:103] pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace has status "Ready":"False"
	I1004 03:22:52.581774   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:52.581797   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:52.581807   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:52.581812   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:52.584810   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:52.585618   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:52.585662   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:52.585703   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:52.585727   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:52.588370   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:53.081407   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:53.081429   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:53.081439   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:53.081445   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:53.084260   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:53.085040   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:53.085057   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:53.085069   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:53.085088   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:53.087688   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:53.581126   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:53.581150   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:53.581159   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:53.581165   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:53.583996   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:53.584930   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:53.584951   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:53.584960   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:53.584966   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:53.587625   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:54.081887   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:54.081907   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:54.081916   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:54.081921   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:54.084827   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:54.086561   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:54.086586   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:54.086596   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:54.086600   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:54.089387   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:54.090105   73775 pod_ready.go:103] pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace has status "Ready":"False"
	I1004 03:22:54.581117   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:54.581141   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:54.581150   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:54.581155   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:54.584031   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:54.584853   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:54.584873   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:54.584882   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:54.584886   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:54.587377   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:55.081282   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:55.081307   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:55.081321   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:55.081325   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:55.087661   73775 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:22:55.089728   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:55.089749   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:55.089759   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:55.089763   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:55.111879   73775 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1004 03:22:55.581973   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:55.581999   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:55.582009   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:55.582014   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:55.584722   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:55.585561   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:55.585579   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:55.585588   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:55.585593   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:55.588148   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:56.081123   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:56.081146   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:56.081157   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:56.081163   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:56.084055   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:56.084974   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:56.084993   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:56.085003   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:56.085008   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:56.087749   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:56.581690   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:56.581715   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:56.581725   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:56.581731   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:56.589772   73775 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1004 03:22:56.590719   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:56.590741   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:56.590751   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:56.590756   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:56.597309   73775 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:22:56.598045   73775 pod_ready.go:103] pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace has status "Ready":"False"
	I1004 03:22:57.081803   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:57.081827   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:57.081836   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:57.081842   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:57.084596   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:57.085426   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:57.085448   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:57.085458   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:57.085463   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:57.088208   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:57.581994   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:57.582018   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:57.582028   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:57.582033   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:57.584910   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:57.586119   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:57.586139   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:57.586147   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:57.586151   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:57.588774   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:58.081882   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:58.081903   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:58.081912   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:58.081917   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:58.085028   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:22:58.085824   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:58.085848   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:58.085861   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:58.085867   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:58.088567   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:58.581173   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:58.581194   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:58.581203   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:58.581231   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:58.583939   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:58.584806   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:58.584830   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:58.584840   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:58.584846   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:58.587311   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:59.081495   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:59.081519   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:59.081529   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:59.081534   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:59.084314   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:59.085157   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:59.085175   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:59.085184   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:59.085190   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:59.087733   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:59.088539   73775 pod_ready.go:103] pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace has status "Ready":"False"
	I1004 03:22:59.581850   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:22:59.581873   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:59.581882   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:59.581889   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:59.584627   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:22:59.585372   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:22:59.585391   73775 round_trippers.go:469] Request Headers:
	I1004 03:22:59.585402   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:22:59.585405   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:22:59.587976   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:00.081370   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:00.081401   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:00.081412   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:00.081418   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:00.085347   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:23:00.086688   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:00.086758   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:00.086780   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:00.086808   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:00.091697   73775 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:23:00.581174   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:00.581199   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:00.581230   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:00.581237   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:00.583925   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:00.584771   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:00.584794   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:00.584804   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:00.584808   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:00.587210   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:01.081162   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:01.081182   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:01.081192   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:01.081196   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:01.084278   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:23:01.085117   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:01.085141   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:01.085151   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:01.085155   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:01.087690   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:01.581503   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:01.581523   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:01.581532   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:01.581536   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:01.593521   73775 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1004 03:23:01.594284   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:01.594300   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:01.594309   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:01.594314   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:01.602613   73775 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1004 03:23:01.603312   73775 pod_ready.go:103] pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace has status "Ready":"False"
	I1004 03:23:02.082067   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:02.082091   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:02.082100   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:02.082105   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:02.085050   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:02.085856   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:02.085869   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:02.085878   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:02.085884   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:02.088591   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:02.581366   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:02.581390   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:02.581399   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:02.581405   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:02.584321   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:02.584994   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:02.585012   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:02.585022   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:02.585027   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:02.587636   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:03.081351   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:03.081379   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:03.081389   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:03.081395   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:03.084272   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:03.084985   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:03.085005   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:03.085014   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:03.085021   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:03.087765   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:03.581361   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:03.581383   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:03.581392   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:03.581397   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:03.586760   73775 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 03:23:03.587680   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:03.587706   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:03.587714   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:03.587718   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:03.590791   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:23:04.081509   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:04.081533   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:04.081543   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:04.081548   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:04.085969   73775 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:23:04.087141   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:04.087158   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:04.087167   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:04.087173   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:04.089830   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:04.090500   73775 pod_ready.go:103] pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace has status "Ready":"False"
	I1004 03:23:04.581412   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:04.581437   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:04.581448   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:04.581459   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:04.584143   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:04.585052   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:04.585073   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:04.585083   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:04.585087   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:04.587583   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:05.081820   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:05.081847   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:05.081860   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:05.081864   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:05.084939   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:23:05.085813   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:05.085833   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:05.085844   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:05.085849   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:05.088661   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:05.581912   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:05.581932   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:05.581942   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:05.581948   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:05.584929   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:05.585762   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:05.585781   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:05.585791   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:05.585797   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:05.588542   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:06.081742   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:06.081765   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:06.081775   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:06.081786   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:06.084798   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:06.085596   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:06.085644   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:06.085669   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:06.085679   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:06.088425   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:06.581234   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:06.581258   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:06.581268   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:06.581271   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:06.584091   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:06.585134   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:06.585156   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:06.585165   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:06.585171   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:06.593042   73775 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:23:06.593991   73775 pod_ready.go:103] pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace has status "Ready":"False"
	I1004 03:23:07.081760   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-md2qq
	I1004 03:23:07.081786   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.081797   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.081803   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.087878   73775 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:23:07.088774   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:07.088794   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.088804   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.088808   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.091453   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:07.092103   73775 pod_ready.go:98] node "ha-481241" hosting pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-481241" has status "Ready":"Unknown"
	I1004 03:23:07.092128   73775 pod_ready.go:82] duration metric: took 17.011219961s for pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace to be "Ready" ...
	E1004 03:23:07.092138   73775 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-481241" hosting pod "coredns-7c65d6cfc9-md2qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-481241" has status "Ready":"Unknown"
	I1004 03:23:07.092145   73775 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-481241" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:07.092210   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-481241
	I1004 03:23:07.092221   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.092229   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.092234   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.095053   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:07.095912   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:07.095932   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.095941   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.095947   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.098678   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:07.099522   73775 pod_ready.go:98] node "ha-481241" hosting pod "etcd-ha-481241" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-481241" has status "Ready":"Unknown"
	I1004 03:23:07.099577   73775 pod_ready.go:82] duration metric: took 7.424261ms for pod "etcd-ha-481241" in "kube-system" namespace to be "Ready" ...
	E1004 03:23:07.099603   73775 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-481241" hosting pod "etcd-ha-481241" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-481241" has status "Ready":"Unknown"
	I1004 03:23:07.099635   73775 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:07.099724   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-481241-m02
	I1004 03:23:07.099752   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.099775   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.099807   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.104075   73775 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:23:07.111029   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:23:07.111811   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.112713   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.112762   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.121105   73775 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1004 03:23:07.121818   73775 pod_ready.go:93] pod "etcd-ha-481241-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:23:07.121867   73775 pod_ready.go:82] duration metric: took 22.205856ms for pod "etcd-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:07.121914   73775 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:07.122016   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-481241-m03
	I1004 03:23:07.122049   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.122071   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.122092   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.125396   73775 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I1004 03:23:07.125584   73775 pod_ready.go:98] error getting pod "etcd-ha-481241-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-481241-m03" not found
	I1004 03:23:07.125631   73775 pod_ready.go:82] duration metric: took 3.696701ms for pod "etcd-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	E1004 03:23:07.125657   73775 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "etcd-ha-481241-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-481241-m03" not found
	I1004 03:23:07.125706   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-481241" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:07.125837   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-481241
	I1004 03:23:07.125870   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.125892   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.125914   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.132506   73775 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:23:07.133360   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:07.133413   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.133436   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.133457   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.136368   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:07.137352   73775 pod_ready.go:98] node "ha-481241" hosting pod "kube-apiserver-ha-481241" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-481241" has status "Ready":"Unknown"
	I1004 03:23:07.137413   73775 pod_ready.go:82] duration metric: took 11.666481ms for pod "kube-apiserver-ha-481241" in "kube-system" namespace to be "Ready" ...
	E1004 03:23:07.137439   73775 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-481241" hosting pod "kube-apiserver-ha-481241" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-481241" has status "Ready":"Unknown"
	I1004 03:23:07.137460   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:07.137573   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-481241-m02
	I1004 03:23:07.137598   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.137620   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.137660   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.140652   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:07.282026   73775 request.go:632] Waited for 140.267093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:23:07.282143   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:23:07.282182   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.282208   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.282231   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.285245   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:07.286326   73775 pod_ready.go:93] pod "kube-apiserver-ha-481241-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:23:07.286387   73775 pod_ready.go:82] duration metric: took 148.891616ms for pod "kube-apiserver-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:07.286413   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:07.481768   73775 request.go:632] Waited for 195.25416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-481241-m03
	I1004 03:23:07.481859   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-481241-m03
	I1004 03:23:07.481874   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.481883   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.481888   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.484558   73775 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1004 03:23:07.484709   73775 pod_ready.go:98] error getting pod "kube-apiserver-ha-481241-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-481241-m03" not found
	I1004 03:23:07.484727   73775 pod_ready.go:82] duration metric: took 198.291293ms for pod "kube-apiserver-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	E1004 03:23:07.484738   73775 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-ha-481241-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-481241-m03" not found
	I1004 03:23:07.484750   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-481241" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:07.682172   73775 request.go:632] Waited for 197.340234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-481241
	I1004 03:23:07.682235   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-481241
	I1004 03:23:07.682243   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.682252   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.682257   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.685058   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:07.882777   73775 request.go:632] Waited for 196.793894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:07.882837   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:07.882850   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:07.882858   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:07.882866   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:07.885664   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:07.886272   73775 pod_ready.go:98] node "ha-481241" hosting pod "kube-controller-manager-ha-481241" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-481241" has status "Ready":"Unknown"
	I1004 03:23:07.886294   73775 pod_ready.go:82] duration metric: took 401.532289ms for pod "kube-controller-manager-ha-481241" in "kube-system" namespace to be "Ready" ...
	E1004 03:23:07.886304   73775 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-481241" hosting pod "kube-controller-manager-ha-481241" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-481241" has status "Ready":"Unknown"
	I1004 03:23:07.886312   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:08.081976   73775 request.go:632] Waited for 195.592107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-481241-m02
	I1004 03:23:08.082065   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-481241-m02
	I1004 03:23:08.082090   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:08.082104   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:08.082109   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:08.087476   73775 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 03:23:08.282365   73775 request.go:632] Waited for 194.133511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:23:08.282420   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:23:08.282426   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:08.282435   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:08.282442   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:08.286038   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:23:08.286618   73775 pod_ready.go:93] pod "kube-controller-manager-ha-481241-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:23:08.286639   73775 pod_ready.go:82] duration metric: took 400.315085ms for pod "kube-controller-manager-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:08.286651   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:08.482570   73775 request.go:632] Waited for 195.840818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-481241-m03
	I1004 03:23:08.482634   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-481241-m03
	I1004 03:23:08.482643   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:08.482653   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:08.482659   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:08.485107   73775 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1004 03:23:08.485382   73775 pod_ready.go:98] error getting pod "kube-controller-manager-ha-481241-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-481241-m03" not found
	I1004 03:23:08.485404   73775 pod_ready.go:82] duration metric: took 198.745481ms for pod "kube-controller-manager-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	E1004 03:23:08.485416   73775 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-ha-481241-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-481241-m03" not found
	I1004 03:23:08.485432   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25pr9" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:08.681909   73775 request.go:632] Waited for 196.406972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25pr9
	I1004 03:23:08.681982   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25pr9
	I1004 03:23:08.681987   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:08.681996   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:08.682002   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:08.684473   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:08.882607   73775 request.go:632] Waited for 197.31854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m04
	I1004 03:23:08.882668   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m04
	I1004 03:23:08.882675   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:08.882684   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:08.882693   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:08.885482   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:08.886303   73775 pod_ready.go:93] pod "kube-proxy-25pr9" in "kube-system" namespace has status "Ready":"True"
	I1004 03:23:08.886325   73775 pod_ready.go:82] duration metric: took 400.885251ms for pod "kube-proxy-25pr9" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:08.886338   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7hdvx" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:09.082634   73775 request.go:632] Waited for 196.229423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hdvx
	I1004 03:23:09.082720   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hdvx
	I1004 03:23:09.082732   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:09.082742   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:09.082749   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:09.085995   73775 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I1004 03:23:09.086142   73775 pod_ready.go:98] error getting pod "kube-proxy-7hdvx" in "kube-system" namespace (skipping!): pods "kube-proxy-7hdvx" not found
	I1004 03:23:09.086160   73775 pod_ready.go:82] duration metric: took 199.815029ms for pod "kube-proxy-7hdvx" in "kube-system" namespace to be "Ready" ...
	E1004 03:23:09.086176   73775 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-proxy-7hdvx" in "kube-system" namespace (skipping!): pods "kube-proxy-7hdvx" not found
	I1004 03:23:09.086191   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9dn8z" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:09.282620   73775 request.go:632] Waited for 196.342916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9dn8z
	I1004 03:23:09.282719   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9dn8z
	I1004 03:23:09.282737   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:09.282748   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:09.282754   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:09.296192   73775 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1004 03:23:09.482228   73775 request.go:632] Waited for 185.351912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:09.482284   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:09.482311   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:09.482324   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:09.482329   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:09.485035   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:09.485846   73775 pod_ready.go:98] node "ha-481241" hosting pod "kube-proxy-9dn8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-481241" has status "Ready":"Unknown"
	I1004 03:23:09.485875   73775 pod_ready.go:82] duration metric: took 399.674768ms for pod "kube-proxy-9dn8z" in "kube-system" namespace to be "Ready" ...
	E1004 03:23:09.485904   73775 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-481241" hosting pod "kube-proxy-9dn8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-481241" has status "Ready":"Unknown"
	I1004 03:23:09.485918   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q9kvx" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:09.682747   73775 request.go:632] Waited for 196.73308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9kvx
	I1004 03:23:09.682808   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9kvx
	I1004 03:23:09.682817   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:09.682827   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:09.682834   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:09.685717   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:09.882515   73775 request.go:632] Waited for 196.127173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:23:09.882577   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:23:09.882587   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:09.882597   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:09.882604   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:09.885376   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:09.886012   73775 pod_ready.go:93] pod "kube-proxy-q9kvx" in "kube-system" namespace has status "Ready":"True"
	I1004 03:23:09.886033   73775 pod_ready.go:82] duration metric: took 400.100419ms for pod "kube-proxy-q9kvx" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:09.886045   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-481241" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:10.081984   73775 request.go:632] Waited for 195.871341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-481241
	I1004 03:23:10.082044   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-481241
	I1004 03:23:10.082055   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:10.082064   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:10.082076   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:10.084829   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:10.281918   73775 request.go:632] Waited for 196.270744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:10.281980   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241
	I1004 03:23:10.281987   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:10.282013   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:10.282024   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:10.284676   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:10.285258   73775 pod_ready.go:98] node "ha-481241" hosting pod "kube-scheduler-ha-481241" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-481241" has status "Ready":"Unknown"
	I1004 03:23:10.285283   73775 pod_ready.go:82] duration metric: took 399.229933ms for pod "kube-scheduler-ha-481241" in "kube-system" namespace to be "Ready" ...
	E1004 03:23:10.285293   73775 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-481241" hosting pod "kube-scheduler-ha-481241" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-481241" has status "Ready":"Unknown"
	I1004 03:23:10.285301   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:10.482650   73775 request.go:632] Waited for 197.279731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-481241-m02
	I1004 03:23:10.482729   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-481241-m02
	I1004 03:23:10.482760   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:10.482776   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:10.482784   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:10.485581   73775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:23:10.689973   73775 request.go:632] Waited for 203.619848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:23:10.690041   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-481241-m02
	I1004 03:23:10.690051   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:10.690060   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:10.690068   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:10.694930   73775 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:23:10.695576   73775 pod_ready.go:93] pod "kube-scheduler-ha-481241-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:23:10.695600   73775 pod_ready.go:82] duration metric: took 410.290852ms for pod "kube-scheduler-ha-481241-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:10.695613   73775 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:23:10.881808   73775 request.go:632] Waited for 186.127032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-481241-m03
	I1004 03:23:10.881871   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-481241-m03
	I1004 03:23:10.881880   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:10.881889   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:10.881901   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:10.884707   73775 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1004 03:23:10.884831   73775 pod_ready.go:98] error getting pod "kube-scheduler-ha-481241-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-481241-m03" not found
	I1004 03:23:10.884850   73775 pod_ready.go:82] duration metric: took 189.229263ms for pod "kube-scheduler-ha-481241-m03" in "kube-system" namespace to be "Ready" ...
	E1004 03:23:10.884864   73775 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-ha-481241-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-481241-m03" not found
	I1004 03:23:10.884873   73775 pod_ready.go:39] duration metric: took 20.829640398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:23:10.884891   73775 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 03:23:10.884951   73775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:23:10.896612   73775 system_svc.go:56] duration metric: took 11.711551ms WaitForService to wait for kubelet
	I1004 03:23:10.896642   73775 kubeadm.go:582] duration metric: took 20.965986271s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:23:10.896663   73775 node_conditions.go:102] verifying NodePressure condition ...
	I1004 03:23:11.082058   73775 request.go:632] Waited for 185.32518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1004 03:23:11.082145   73775 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1004 03:23:11.082159   73775 round_trippers.go:469] Request Headers:
	I1004 03:23:11.082168   73775 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:23:11.082175   73775 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1004 03:23:11.085254   73775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:23:11.086691   73775 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1004 03:23:11.086716   73775 node_conditions.go:123] node cpu capacity is 2
	I1004 03:23:11.086727   73775 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1004 03:23:11.086733   73775 node_conditions.go:123] node cpu capacity is 2
	I1004 03:23:11.086738   73775 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1004 03:23:11.086742   73775 node_conditions.go:123] node cpu capacity is 2
	I1004 03:23:11.086747   73775 node_conditions.go:105] duration metric: took 190.078875ms to run NodePressure ...
	I1004 03:23:11.086762   73775 start.go:241] waiting for startup goroutines ...
	I1004 03:23:11.086787   73775 start.go:255] writing updated cluster config ...
	I1004 03:23:11.087114   73775 ssh_runner.go:195] Run: rm -f paused
	I1004 03:23:11.152464   73775 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 03:23:11.155884   73775 out.go:177] * Done! kubectl is now configured to use "ha-481241" cluster and "default" namespace by default
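	The wait loop above polls each control-plane pod together with the node that hosts it: a pod is skipped when its node reports Ready "Unknown", and the "Waited for ... due to client-side throttling" lines are client-go's default client-side rate limiter spacing out the GET requests. A minimal sketch of that pod-on-node readiness check, assuming k8s.io/client-go and a kubeconfig at $HOME/.kube/config — illustrative only, not minikube's pod_ready implementation; the pod/node names are copied from the log:

	// readiness_sketch.go — illustrative only.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	// nodeIsReady treats Ready "Unknown" (kubelet stopped posting status) as not
	// ready, mirroring the skip decisions in the log above.
	func nodeIsReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()

		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-481241", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		node, err := client.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s is hosted on %s; node ready: %v\n", pod.Name, node.Name, nodeIsReady(node))
	}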
	
	
	==> CRI-O <==
	Oct 04 03:22:36 ha-481241 crio[644]: time="2024-10-04 03:22:36.717892537Z" level=info msg="Started container" PID=1840 containerID=7fa9401acf819f84fa33227769b488b10fc706315ff711f587e13381239bdc7d description=kube-system/kube-controller-manager-ha-481241/kube-controller-manager id=702c4c9e-5b5d-4971-bd98-df1f0a091563 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6863a319a01925086d1b60fd4dc5f0a7cbb2373e64df3dd4f18b92a72ce8ccfa
	Oct 04 03:22:46 ha-481241 conmon[1440]: conmon d3c72fb2bf5a3385847b <ninfo>: container 1459 exited with status 1
	Oct 04 03:22:46 ha-481241 crio[644]: time="2024-10-04 03:22:46.907414738Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=86922602-6d84-45f2-ba57-fdc3bf2a9b57 name=/runtime.v1.ImageService/ImageStatus
	Oct 04 03:22:46 ha-481241 crio[644]: time="2024-10-04 03:22:46.907637372Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=86922602-6d84-45f2-ba57-fdc3bf2a9b57 name=/runtime.v1.ImageService/ImageStatus
	Oct 04 03:22:46 ha-481241 crio[644]: time="2024-10-04 03:22:46.909734975Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=68444718-8e1c-4a39-9061-683907b6e76e name=/runtime.v1.ImageService/ImageStatus
	Oct 04 03:22:46 ha-481241 crio[644]: time="2024-10-04 03:22:46.911151528Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=68444718-8e1c-4a39-9061-683907b6e76e name=/runtime.v1.ImageService/ImageStatus
	Oct 04 03:22:46 ha-481241 crio[644]: time="2024-10-04 03:22:46.912167079Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=09c7fa14-b88e-408f-842f-32c712704721 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 04 03:22:46 ha-481241 crio[644]: time="2024-10-04 03:22:46.912261567Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 04 03:22:46 ha-481241 crio[644]: time="2024-10-04 03:22:46.930654350Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f36048c0fda470849501477118e97ac7142ee35462159d401c7affa952be7c3e/merged/etc/passwd: no such file or directory"
	Oct 04 03:22:46 ha-481241 crio[644]: time="2024-10-04 03:22:46.930709447Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f36048c0fda470849501477118e97ac7142ee35462159d401c7affa952be7c3e/merged/etc/group: no such file or directory"
	Oct 04 03:22:46 ha-481241 crio[644]: time="2024-10-04 03:22:46.990209315Z" level=info msg="Created container e746206d11b10547e7f4ca851e0f0bd9539d5f45e160626baa1f8b594317d345: kube-system/storage-provisioner/storage-provisioner" id=09c7fa14-b88e-408f-842f-32c712704721 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 04 03:22:46 ha-481241 crio[644]: time="2024-10-04 03:22:46.990900603Z" level=info msg="Starting container: e746206d11b10547e7f4ca851e0f0bd9539d5f45e160626baa1f8b594317d345" id=ceef5281-5a3b-4139-bac4-8f34adb08c51 name=/runtime.v1.RuntimeService/StartContainer
	Oct 04 03:22:47 ha-481241 crio[644]: time="2024-10-04 03:22:47.002298936Z" level=info msg="Started container" PID=1887 containerID=e746206d11b10547e7f4ca851e0f0bd9539d5f45e160626baa1f8b594317d345 description=kube-system/storage-provisioner/storage-provisioner id=ceef5281-5a3b-4139-bac4-8f34adb08c51 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4975423b9b815efc2d57cddb8763376cda6da40710052efd9408a2982e982836
	Oct 04 03:22:56 ha-481241 crio[644]: time="2024-10-04 03:22:56.198030050Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 04 03:22:56 ha-481241 crio[644]: time="2024-10-04 03:22:56.203966127Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 04 03:22:56 ha-481241 crio[644]: time="2024-10-04 03:22:56.204002451Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 04 03:22:56 ha-481241 crio[644]: time="2024-10-04 03:22:56.204027542Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 04 03:22:56 ha-481241 crio[644]: time="2024-10-04 03:22:56.207850050Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 04 03:22:56 ha-481241 crio[644]: time="2024-10-04 03:22:56.207885094Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 04 03:22:56 ha-481241 crio[644]: time="2024-10-04 03:22:56.207905639Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 04 03:22:56 ha-481241 crio[644]: time="2024-10-04 03:22:56.211223876Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 04 03:22:56 ha-481241 crio[644]: time="2024-10-04 03:22:56.211257262Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 04 03:22:56 ha-481241 crio[644]: time="2024-10-04 03:22:56.211273705Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 04 03:22:56 ha-481241 crio[644]: time="2024-10-04 03:22:56.214355892Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 04 03:22:56 ha-481241 crio[644]: time="2024-10-04 03:22:56.214388983Z" level=info msg="Updated default CNI network name to kindnet"
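	The CNI monitoring events above are CRI-O's watcher on /etc/cni/net.d reacting to the kindnet conflist being created, written, and renamed; it re-reads the config and updates its default CNI network on each event. A tiny watcher in the same spirit, sketched with the fsnotify library — this is not CRI-O's implementation:

	// cni_watch_sketch.go — illustrative only.
	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()

		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-w.Events:
				// CRI-O re-reads the conflist and updates the default network here;
				// this sketch only logs the CREATE/WRITE/RENAME events.
				log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
			case err := <-w.Errors:
				log.Printf("watch error: %v", err)
			}
		}
	}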
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e746206d11b10       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   26 seconds ago       Running             storage-provisioner       3                   4975423b9b815       storage-provisioner
	7fa9401acf819       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   36 seconds ago       Running             kube-controller-manager   8                   6863a319a0192       kube-controller-manager-ha-481241
	891f609c34aea       4eadde00b6c50b581474eaa28b09bfcdd974ccaab8bafac22b08e7d2ecd66fc1   38 seconds ago       Running             kube-vip                  3                   34ad728b0f8a6       kube-vip-ha-481241
	8708cccbf664a       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   43 seconds ago       Running             kube-apiserver            4                   417529bbeb5a7       kube-apiserver-ha-481241
	9e50b07f08c6f       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   54 seconds ago       Running             coredns                   2                   186a099fab41c       coredns-7c65d6cfc9-bmz2w
	f0707086683c4       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   54 seconds ago       Running             busybox                   2                   18c21d72e31d4       busybox-7dff88458-24zpz
	a943e40f95814       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   56 seconds ago       Running             kube-proxy                2                   32e2a2d86d775       kube-proxy-9dn8z
	d3c72fb2bf5a3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   56 seconds ago       Exited              storage-provisioner       2                   4975423b9b815       storage-provisioner
	bd5f0f9fe9c1e       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   57 seconds ago       Running             kindnet-cni               2                   c345c28fba897       kindnet-nvptn
	3e6d28406eef7       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   58 seconds ago       Running             coredns                   2                   9b9c397d3a03b       coredns-7c65d6cfc9-md2qq
	569cc873f796d       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   About a minute ago   Exited              kube-controller-manager   7                   6863a319a0192       kube-controller-manager-ha-481241
	14b41fff0edb1       4eadde00b6c50b581474eaa28b09bfcdd974ccaab8bafac22b08e7d2ecd66fc1   About a minute ago   Exited              kube-vip                  2                   34ad728b0f8a6       kube-vip-ha-481241
	e5411b9e8466f       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   About a minute ago   Exited              kube-apiserver            3                   417529bbeb5a7       kube-apiserver-ha-481241
	3be43c27a65fb       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   About a minute ago   Running             etcd                      2                   c1222a3442806       etcd-ha-481241
	4da24825e9c47       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   About a minute ago   Running             kube-scheduler            2                   ecaba73f5050f       kube-scheduler-ha-481241
	
	
	==> coredns [3e6d28406eef734be5662bbb34bd522341a0cfbfed8705118f45405e9433d949] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47589 - 64880 "HINFO IN 4176981943760721406.2795283289974823691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023459538s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[289797965]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:22:14.764) (total time: 30001ms):
	Trace[289797965]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (03:22:44.765)
	Trace[289797965]: [30.001618563s] [30.001618563s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[279295850]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:22:14.764) (total time: 30001ms):
	Trace[279295850]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:22:44.765)
	Trace[279295850]: [30.001893678s] [30.001893678s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1439837179]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:22:14.764) (total time: 30001ms):
	Trace[1439837179]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:22:44.766)
	Trace[1439837179]: [30.001927482s] [30.001927482s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
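	The dial tcp 10.96.0.1:443: i/o timeout errors mean this CoreDNS instance could not reach the in-cluster API service VIP for roughly 30 seconds after its restart, which is why it started "with unsynced Kubernetes API" and kept logging plugin/ready: Still waiting on: "kubernetes" until the list/watch succeeded. The same reachability condition can be checked from inside a pod with a plain TCP dial — a sketch, assuming the default 10.96.0.1:443 service VIP seen in the log:

	// apivip_check_sketch.go — illustrative only.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			// Matches the "i/o timeout" failures CoreDNS logged above.
			fmt.Println("API service VIP unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("API service VIP reachable")
	}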
	
	
	==> coredns [9e50b07f08c6f29e8b389b6e2d97377cda798154fec3fa9df5d0dc8d446956aa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37617 - 53411 "HINFO IN 2336928745228147784.8757485918339211026. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022024695s
	
	
	==> describe nodes <==
	Name:               ha-481241
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-481241
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-481241
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T03_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:04 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-481241
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:22:25 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 04 Oct 2024 03:21:54 +0000   Fri, 04 Oct 2024 03:23:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 04 Oct 2024 03:21:54 +0000   Fri, 04 Oct 2024 03:23:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 04 Oct 2024 03:21:54 +0000   Fri, 04 Oct 2024 03:23:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 04 Oct 2024 03:21:54 +0000   Fri, 04 Oct 2024 03:23:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-481241
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 71005900836843b187f7625113c5766e
	  System UUID:                75c38452-0a35-4f78-8ba0-34a54f5375b6
	  Boot ID:                    cc975b9c-d4f7-443e-a63b-68cdfd7ad286
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-24zpz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	  kube-system                 coredns-7c65d6cfc9-bmz2w             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 coredns-7c65d6cfc9-md2qq             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-ha-481241                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-nvptn                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-481241             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-481241    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9dn8z                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-481241             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-481241                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   Starting                 4m41s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-481241 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-481241 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-481241 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node ha-481241 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node ha-481241 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node ha-481241 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 11m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           11m                    node-controller  Node ha-481241 event: Registered Node ha-481241 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-481241 event: Registered Node ha-481241 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-481241 status is now: NodeReady
	  Normal   RegisteredNode           9m28s                  node-controller  Node ha-481241 event: Registered Node ha-481241 in Controller
	  Normal   RegisteredNode           6m23s                  node-controller  Node ha-481241 event: Registered Node ha-481241 in Controller
	  Normal   NodeHasSufficientMemory  5m44s (x8 over 5m44s)  kubelet          Node ha-481241 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m44s (x8 over 5m44s)  kubelet          Node ha-481241 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m44s (x7 over 5m44s)  kubelet          Node ha-481241 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 5m44s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 5m44s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m7s                   node-controller  Node ha-481241 event: Registered Node ha-481241 in Controller
	  Normal   RegisteredNode           4m17s                  node-controller  Node ha-481241 event: Registered Node ha-481241 in Controller
	  Normal   RegisteredNode           3m42s                  node-controller  Node ha-481241 event: Registered Node ha-481241 in Controller
	  Normal   Starting                 117s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  117s (x8 over 117s)    kubelet          Node ha-481241 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 117s)    kubelet          Node ha-481241 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 117s)    kubelet          Node ha-481241 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node ha-481241 event: Registered Node ha-481241 in Controller
	  Normal   RegisteredNode           33s                    node-controller  Node ha-481241 event: Registered Node ha-481241 in Controller
	  Normal   NodeNotReady             7s                     node-controller  Node ha-481241 status is now: NodeNotReady
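	The Unknown conditions and the closing NodeNotReady event are the node-controller's response to the kubelet on ha-481241 no longer posting status; the node.kubernetes.io/unreachable NoExecute/NoSchedule taints listed under Taints above are applied for the same reason, and they are why the wait loop earlier skipped pods hosted on this node. A small client-go sketch (same kubeconfig assumption as earlier, illustrative only) that reads those taints back:

	// taint_sketch.go — illustrative only.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-481241", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// On an unreachable node this typically prints
		// node.kubernetes.io/unreachable with NoSchedule and NoExecute effects.
		for _, t := range node.Spec.Taints {
			fmt.Printf("%s=%s:%s\n", t.Key, t.Value, t.Effect)
		}
	}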
	
	
	Name:               ha-481241-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-481241-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-481241
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_12_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-481241-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:23:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:12:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:12:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:12:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:13:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-481241-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f290ef9cef64066a462998f06c1e722
	  System UUID:                810438ac-7ec9-4a4d-a319-2fac0563cdfb
	  Boot ID:                    cc975b9c-d4f7-443e-a63b-68cdfd7ad286
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fb8qp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	  kube-system                 etcd-ha-481241-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-2rz67                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-481241-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-481241-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-q9kvx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-481241-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-481241-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 6m45s                  kube-proxy       
	  Normal   Starting                 5m8s                   kube-proxy       
	  Normal   Starting                 69s                    kube-proxy       
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-481241-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-481241-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-481241-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                    node-controller  Node ha-481241-m02 event: Registered Node ha-481241-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-481241-m02 event: Registered Node ha-481241-m02 in Controller
	  Normal   RegisteredNode           9m28s                  node-controller  Node ha-481241-m02 event: Registered Node ha-481241-m02 in Controller
	  Normal   NodeHasSufficientPID     7m1s (x7 over 7m1s)    kubelet          Node ha-481241-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m1s (x8 over 7m1s)    kubelet          Node ha-481241-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 7m1s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m1s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m1s (x8 over 7m1s)    kubelet          Node ha-481241-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           6m23s                  node-controller  Node ha-481241-m02 event: Registered Node ha-481241-m02 in Controller
	  Normal   NodeHasSufficientMemory  5m42s (x8 over 5m42s)  kubelet          Node ha-481241-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     5m42s (x7 over 5m42s)  kubelet          Node ha-481241-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m42s (x8 over 5m42s)  kubelet          Node ha-481241-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m42s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m42s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           5m7s                   node-controller  Node ha-481241-m02 event: Registered Node ha-481241-m02 in Controller
	  Normal   RegisteredNode           4m17s                  node-controller  Node ha-481241-m02 event: Registered Node ha-481241-m02 in Controller
	  Normal   RegisteredNode           3m42s                  node-controller  Node ha-481241-m02 event: Registered Node ha-481241-m02 in Controller
	  Normal   Starting                 114s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 114s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  114s (x8 over 114s)    kubelet          Node ha-481241-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    114s (x8 over 114s)    kubelet          Node ha-481241-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     114s (x7 over 114s)    kubelet          Node ha-481241-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node ha-481241-m02 event: Registered Node ha-481241-m02 in Controller
	  Normal   RegisteredNode           33s                    node-controller  Node ha-481241-m02 event: Registered Node ha-481241-m02 in Controller
	
	
	Name:               ha-481241-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-481241-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-481241
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_14_49_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:14:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-481241-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:23:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:22:56 +0000   Fri, 04 Oct 2024 03:20:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:22:56 +0000   Fri, 04 Oct 2024 03:20:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:22:56 +0000   Fri, 04 Oct 2024 03:20:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:22:56 +0000   Fri, 04 Oct 2024 03:20:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-481241-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 1bdc1e255c6c46fd9916b114d80b4f85
	  System UUID:                c10ee74c-144e-4f25-bc1b-9641ca230128
	  Boot ID:                    cc975b9c-d4f7-443e-a63b-68cdfd7ad286
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8vn2j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kindnet-lbg2z              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m26s
	  kube-system                 kube-proxy-25pr9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6s                     kube-proxy       
	  Normal   Starting                 8m23s                  kube-proxy       
	  Normal   Starting                 2m56s                  kube-proxy       
	  Warning  CgroupV1                 8m26s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   CIDRAssignmentFailed     8m26s                  cidrAllocator    Node ha-481241-m04 status is now: CIDRAssignmentFailed
	  Normal   Starting                 8m26s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m26s (x2 over 8m26s)  kubelet          Node ha-481241-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m26s (x2 over 8m26s)  kubelet          Node ha-481241-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m26s (x2 over 8m26s)  kubelet          Node ha-481241-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m25s                  node-controller  Node ha-481241-m04 event: Registered Node ha-481241-m04 in Controller
	  Normal   RegisteredNode           8m24s                  node-controller  Node ha-481241-m04 event: Registered Node ha-481241-m04 in Controller
	  Normal   RegisteredNode           8m22s                  node-controller  Node ha-481241-m04 event: Registered Node ha-481241-m04 in Controller
	  Normal   NodeReady                7m44s                  kubelet          Node ha-481241-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m24s                  node-controller  Node ha-481241-m04 event: Registered Node ha-481241-m04 in Controller
	  Normal   RegisteredNode           5m8s                   node-controller  Node ha-481241-m04 event: Registered Node ha-481241-m04 in Controller
	  Normal   NodeNotReady             4m28s                  node-controller  Node ha-481241-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           4m18s                  node-controller  Node ha-481241-m04 event: Registered Node ha-481241-m04 in Controller
	  Normal   RegisteredNode           3m43s                  node-controller  Node ha-481241-m04 event: Registered Node ha-481241-m04 in Controller
	  Normal   Starting                 3m27s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m27s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m21s (x7 over 3m27s)  kubelet          Node ha-481241-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  3m14s (x8 over 3m27s)  kubelet          Node ha-481241-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m14s (x8 over 3m27s)  kubelet          Node ha-481241-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           53s                    node-controller  Node ha-481241-m04 event: Registered Node ha-481241-m04 in Controller
	  Normal   RegisteredNode           34s                    node-controller  Node ha-481241-m04 event: Registered Node ha-481241-m04 in Controller
	  Normal   Starting                 30s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 30s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     24s (x7 over 30s)      kubelet          Node ha-481241-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  18s (x8 over 30s)      kubelet          Node ha-481241-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s (x8 over 30s)      kubelet          Node ha-481241-m04 status is now: NodeHasNoDiskPressure
	
	
	==> dmesg <==
	[Oct 4 02:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015570] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.529270] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.049348] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015318] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.608453] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.834894] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 4 03:11] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [3be43c27a65fbd80ceb8a1778642bcba1ae73fdedbd1cfe9fe82e777107049ae] <==
	{"level":"warn","ts":"2024-10-04T03:21:48.601118Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T03:21:41.630198Z","time spent":"6.970909322s","remote":"127.0.0.1:48220","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:10000 "}
	{"level":"warn","ts":"2024-10-04T03:21:48.601160Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.009519232s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-10-04T03:21:48.601180Z","caller":"traceutil/trace.go:171","msg":"trace[2048459129] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; }","duration":"7.009541467s","start":"2024-10-04T03:21:41.591635Z","end":"2024-10-04T03:21:48.601176Z","steps":["trace[2048459129] 'agreement among raft nodes before linearized reading'  (duration: 7.009519026s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T03:21:48.601196Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T03:21:41.591588Z","time spent":"7.009602242s","remote":"127.0.0.1:48176","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" limit:500 "}
	{"level":"warn","ts":"2024-10-04T03:21:48.601247Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.013725356s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-10-04T03:21:48.601266Z","caller":"traceutil/trace.go:171","msg":"trace[387500144] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; }","duration":"7.013747649s","start":"2024-10-04T03:21:41.587514Z","end":"2024-10-04T03:21:48.601262Z","steps":["trace[387500144] 'agreement among raft nodes before linearized reading'  (duration: 7.013725463s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T03:21:48.601283Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T03:21:41.587473Z","time spent":"7.013803862s","remote":"127.0.0.1:48394","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:500 "}
	{"level":"warn","ts":"2024-10-04T03:21:48.601326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.025287468s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-10-04T03:21:48.601342Z","caller":"traceutil/trace.go:171","msg":"trace[2002309972] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; }","duration":"7.02530377s","start":"2024-10-04T03:21:41.576034Z","end":"2024-10-04T03:21:48.601337Z","steps":["trace[2002309972] 'agreement among raft nodes before linearized reading'  (duration: 7.025287656s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T03:21:48.601361Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T03:21:41.575995Z","time spent":"7.025362084s","remote":"127.0.0.1:48392","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"warn","ts":"2024-10-04T03:21:48.601398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.029526896s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-10-04T03:21:48.601419Z","caller":"traceutil/trace.go:171","msg":"trace[1292831822] range","detail":"{range_begin:/registry/clusterroles; range_end:; }","duration":"7.029565395s","start":"2024-10-04T03:21:41.571849Z","end":"2024-10-04T03:21:48.601414Z","steps":["trace[1292831822] 'agreement among raft nodes before linearized reading'  (duration: 7.029527332s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T03:21:48.601437Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T03:21:41.571809Z","time spent":"7.029621549s","remote":"127.0.0.1:48366","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles\" limit:1 "}
	{"level":"warn","ts":"2024-10-04T03:21:48.601517Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.090129969s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-10-04T03:21:48.601604Z","caller":"traceutil/trace.go:171","msg":"trace[1624095598] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; }","duration":"7.090218894s","start":"2024-10-04T03:21:41.511379Z","end":"2024-10-04T03:21:48.601598Z","steps":["trace[1624095598] 'agreement among raft nodes before linearized reading'  (duration: 7.090130174s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T03:21:48.601624Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T03:21:41.511329Z","time spent":"7.090287193s","remote":"127.0.0.1:48172","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	{"level":"warn","ts":"2024-10-04T03:21:48.601654Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.140120355s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-10-04T03:21:48.601668Z","caller":"traceutil/trace.go:171","msg":"trace[1440842579] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; }","duration":"7.140136191s","start":"2024-10-04T03:21:41.461528Z","end":"2024-10-04T03:21:48.601664Z","steps":["trace[1440842579] 'agreement among raft nodes before linearized reading'  (duration: 7.140119871s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T03:21:48.601687Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T03:21:41.461475Z","time spent":"7.140207254s","remote":"127.0.0.1:48366","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 "}
	{"level":"warn","ts":"2024-10-04T03:21:48.601706Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.779821575s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-tf6t4q2mbmwy6tje6ogj4rxjzu\" ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-10-04T03:21:48.601722Z","caller":"traceutil/trace.go:171","msg":"trace[245231035] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-tf6t4q2mbmwy6tje6ogj4rxjzu; range_end:; }","duration":"7.779837132s","start":"2024-10-04T03:21:40.821881Z","end":"2024-10-04T03:21:48.601718Z","steps":["trace[245231035] 'agreement among raft nodes before linearized reading'  (duration: 7.77982146s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T03:21:48.601739Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T03:21:40.821841Z","time spent":"7.779890579s","remote":"127.0.0.1:48278","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/leases/kube-system/apiserver-tf6t4q2mbmwy6tje6ogj4rxjzu\" "}
	{"level":"warn","ts":"2024-10-04T03:21:48.627845Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"987.035541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-481241-m02\" ","response":"range_response_count:1 size:6102"}
	{"level":"info","ts":"2024-10-04T03:21:48.627904Z","caller":"traceutil/trace.go:171","msg":"trace[1213993043] range","detail":"{range_begin:/registry/minions/ha-481241-m02; range_end:; response_count:1; response_revision:2734; }","duration":"987.10608ms","start":"2024-10-04T03:21:47.640786Z","end":"2024-10-04T03:21:48.627892Z","steps":["trace[1213993043] 'agreement among raft nodes before linearized reading'  (duration: 986.95025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T03:21:48.627933Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T03:21:47.640748Z","time spent":"987.179154ms","remote":"127.0.0.1:48198","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":6126,"request content":"key:\"/registry/minions/ha-481241-m02\" "}
	
	
	==> kernel <==
	 03:23:14 up  1:05,  0 users,  load average: 2.19, 2.34, 1.65
	Linux ha-481241 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [bd5f0f9fe9c1e4fd60699df6c04e7a22cb0fc0926d94c2fcff3731935b2e8bd4] <==
	Trace[1977712876]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (03:22:46.198)
	Trace[1977712876]: [30.001048831s] [30.001048831s] END
	E1004 03:22:46.198806       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W1004 03:22:46.198867       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1004 03:22:46.198928       1 trace.go:236] Trace[2007715913]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (04-Oct-2024 03:22:16.198) (total time: 30000ms):
	Trace[2007715913]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (03:22:46.198)
	Trace[2007715913]: [30.00020337s] [30.00020337s] END
	E1004 03:22:46.198937       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1004 03:22:47.798840       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1004 03:22:47.798965       1 metrics.go:61] Registering metrics
	I1004 03:22:47.799191       1 controller.go:374] Syncing nftables rules
	I1004 03:22:56.197764       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:22:56.197802       1 main.go:299] handling current node
	I1004 03:22:56.202136       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I1004 03:22:56.202188       1 main.go:322] Node ha-481241-m02 has CIDR [10.244.1.0/24] 
	I1004 03:22:56.202401       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I1004 03:22:56.202523       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I1004 03:22:56.202537       1 main.go:322] Node ha-481241-m04 has CIDR [10.244.3.0/24] 
	I1004 03:22:56.202600       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I1004 03:23:06.201682       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 03:23:06.201837       1 main.go:299] handling current node
	I1004 03:23:06.201876       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I1004 03:23:06.201910       1 main.go:322] Node ha-481241-m02 has CIDR [10.244.1.0/24] 
	I1004 03:23:06.202114       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I1004 03:23:06.202140       1 main.go:322] Node ha-481241-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8708cccbf664a5e7f6b14ba0e23279994044ed4321214e1dcaac8cd32234a60a] <==
	I1004 03:22:34.702691       1 establishing_controller.go:81] Starting EstablishingController
	I1004 03:22:34.702722       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1004 03:22:34.702744       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1004 03:22:34.702755       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1004 03:22:34.795830       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1004 03:22:34.795940       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:22:34.795995       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 03:22:34.797567       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1004 03:22:34.797606       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:22:34.797614       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:22:34.798785       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1004 03:22:34.799014       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1004 03:22:34.799038       1 aggregator.go:171] initial CRD sync complete...
	I1004 03:22:34.799045       1 autoregister_controller.go:144] Starting autoregister controller
	I1004 03:22:34.799049       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 03:22:34.799053       1 cache.go:39] Caches are synced for autoregister controller
	I1004 03:22:34.806106       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1004 03:22:34.814856       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1004 03:22:34.820160       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 03:22:34.820188       1 policy_source.go:224] refreshing policies
	I1004 03:22:34.881915       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:22:35.304585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1004 03:22:35.818221       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1004 03:22:35.819737       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:22:35.829466       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e5411b9e8466f3953ae8273e10eb9226f6fed888828569bb88e00f3ee48b0aeb] <==
	E1004 03:21:48.622434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: etcdserver: leader changed" logger="UnhandledError"
	W1004 03:21:48.622471       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: etcdserver: leader changed. Retrying...
	I1004 03:21:49.578429       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:21:50.071179       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:21:50.670411       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 03:21:50.674580       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1004 03:21:50.683127       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1004 03:21:50.774104       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1004 03:21:51.074670       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:21:51.074726       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:21:51.254297       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 03:21:51.254411       1 policy_source.go:224] refreshing policies
	I1004 03:21:51.275963       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1004 03:21:51.276102       1 aggregator.go:171] initial CRD sync complete...
	I1004 03:21:51.276142       1 autoregister_controller.go:144] Starting autoregister controller
	I1004 03:21:51.276177       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 03:21:51.276207       1 cache.go:39] Caches are synced for autoregister controller
	I1004 03:21:51.347306       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1004 03:21:51.587231       1 cache.go:39] Caches are synced for RemoteAvailability controller
	W1004 03:21:51.680615       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1004 03:21:51.846516       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:21:51.884599       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:21:51.890938       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1004 03:21:51.893932       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	F1004 03:22:29.584425       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [569cc873f796d6c9cde7eed9a6d03de1dbd5c4ac2c103f967b60b300cc48fcca] <==
	I1004 03:22:04.327456       1 serving.go:386] Generated self-signed cert in-memory
	I1004 03:22:05.269811       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1004 03:22:05.269843       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:22:05.271467       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1004 03:22:05.271690       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1004 03:22:05.271869       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1004 03:22:05.271939       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1004 03:22:15.292071       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [7fa9401acf819f84fa33227769b488b10fc706315ff711f587e13381239bdc7d] <==
	I1004 03:22:40.911062       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:22:40.911091       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 03:22:40.950444       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:22:55.124534       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.14477ms"
	I1004 03:22:55.124708       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.409µs"
	I1004 03:22:56.143958       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.421µs"
	I1004 03:22:56.843414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-481241-m04"
	E1004 03:23:00.262689       1 gc_controller.go:151] "Failed to get node" err="node \"ha-481241-m03\" not found" logger="pod-garbage-collector-controller" node="ha-481241-m03"
	E1004 03:23:00.262760       1 gc_controller.go:151] "Failed to get node" err="node \"ha-481241-m03\" not found" logger="pod-garbage-collector-controller" node="ha-481241-m03"
	E1004 03:23:00.262770       1 gc_controller.go:151] "Failed to get node" err="node \"ha-481241-m03\" not found" logger="pod-garbage-collector-controller" node="ha-481241-m03"
	E1004 03:23:00.262777       1 gc_controller.go:151] "Failed to get node" err="node \"ha-481241-m03\" not found" logger="pod-garbage-collector-controller" node="ha-481241-m03"
	E1004 03:23:00.262784       1 gc_controller.go:151] "Failed to get node" err="node \"ha-481241-m03\" not found" logger="pod-garbage-collector-controller" node="ha-481241-m03"
	I1004 03:23:06.611769       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-481241-m04"
	I1004 03:23:06.611879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-481241"
	I1004 03:23:06.633223       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-481241"
	I1004 03:23:06.827363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.380494ms"
	I1004 03:23:06.827784       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="158.66µs"
	I1004 03:23:06.902451       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.251µs"
	I1004 03:23:06.914206       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="114.87µs"
	I1004 03:23:06.903583       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-pc5zb EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-pc5zb\": the object has been modified; please apply your changes to the latest version and try again"
	I1004 03:23:06.903971       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"a27b42db-e89f-4eb3-aebd-6c3dbd00f2ac", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-pc5zb EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-pc5zb": the object has been modified; please apply your changes to the latest version and try again
	I1004 03:23:08.219305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.527945ms"
	I1004 03:23:08.219410       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.263µs"
	I1004 03:23:10.368904       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-481241"
	I1004 03:23:11.957550       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-481241"
	
	
	==> kube-proxy [a943e40f95814ed4a353d25cf07944a04f62eed1ca79daa34877b0765e30b53d] <==
	I1004 03:22:16.820153       1 server_linux.go:66] "Using iptables proxy"
	I1004 03:22:16.926821       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1004 03:22:16.926904       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:22:16.946933       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1004 03:22:16.946990       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:22:16.948689       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:22:16.949013       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:22:16.949035       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:22:16.950228       1 config.go:199] "Starting service config controller"
	I1004 03:22:16.950267       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:22:16.950291       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:22:16.950295       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:22:16.950767       1 config.go:328] "Starting node config controller"
	I1004 03:22:16.950784       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:22:17.050987       1 shared_informer.go:320] Caches are synced for node config
	I1004 03:22:17.051017       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:22:17.051036       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4da24825e9c4736f1d4e00864c0d2aa39d75f9a3a66b8334a5d802d2339ac0fb] <==
	E1004 03:21:49.043133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:21:49.459616       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 03:21:49.459746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:21:50.887150       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1004 03:21:50.887349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	W1004 03:21:50.941265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1004 03:21:50.941363       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W1004 03:21:51.083323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1004 03:21:51.083493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	I1004 03:22:09.709092       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1004 03:22:34.689570       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:38216->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.689673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:38332->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.689731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:38340->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.689797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:38328->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.689853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:38324->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.689909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:38318->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.689966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:38302->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.690021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:38300->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.690080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:38286->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.690127       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:38272->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.690175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:38266->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.690224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:38254->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.690279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:38240->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.690338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:38228->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1004 03:22:34.736247       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:38210->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Oct 04 03:22:16 ha-481241 kubelet[761]: E1004 03:22:16.684614     761 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012136684048134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:22:16 ha-481241 kubelet[761]: E1004 03:22:16.684655     761 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012136684048134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:22:21 ha-481241 kubelet[761]: I1004 03:22:21.640975     761 scope.go:117] "RemoveContainer" containerID="569cc873f796d6c9cde7eed9a6d03de1dbd5c4ac2c103f967b60b300cc48fcca"
	Oct 04 03:22:21 ha-481241 kubelet[761]: E1004 03:22:21.641173     761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-481241_kube-system(3806066f7a3c21d8a8d96ad36a17759a)\"" pod="kube-system/kube-controller-manager-ha-481241" podUID="3806066f7a3c21d8a8d96ad36a17759a"
	Oct 04 03:22:23 ha-481241 kubelet[761]: I1004 03:22:23.845589     761 scope.go:117] "RemoveContainer" containerID="569cc873f796d6c9cde7eed9a6d03de1dbd5c4ac2c103f967b60b300cc48fcca"
	Oct 04 03:22:23 ha-481241 kubelet[761]: E1004 03:22:23.845786     761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-481241_kube-system(3806066f7a3c21d8a8d96ad36a17759a)\"" pod="kube-system/kube-controller-manager-ha-481241" podUID="3806066f7a3c21d8a8d96ad36a17759a"
	Oct 04 03:22:26 ha-481241 kubelet[761]: E1004 03:22:26.685852     761 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012146685593117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:22:26 ha-481241 kubelet[761]: E1004 03:22:26.685887     761 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012146685593117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:22:29 ha-481241 kubelet[761]: I1004 03:22:29.863378     761 scope.go:117] "RemoveContainer" containerID="e5411b9e8466f3953ae8273e10eb9226f6fed888828569bb88e00f3ee48b0aeb"
	Oct 04 03:22:29 ha-481241 kubelet[761]: I1004 03:22:29.864666     761 status_manager.go:851] "Failed to get status for pod" podUID="ed6221333410679ab569b662598b0ee1" pod="kube-system/kube-apiserver-ha-481241" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-481241\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Oct 04 03:22:29 ha-481241 kubelet[761]: E1004 03:22:29.865421     761 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-481241.17fb21d593c39649\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-481241.17fb21d593c39649  kube-system   2776 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-481241,UID:ed6221333410679ab569b662598b0ee1,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.1\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-481241,},FirstTimestamp:2024-10-04 03:21:23 +0000 UTC,LastTimestamp:2024-10-04 03:22:29.864566436 +0000 UTC m=+73.382766095,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481241,}"
	Oct 04 03:22:34 ha-481241 kubelet[761]: I1004 03:22:34.876791     761 scope.go:117] "RemoveContainer" containerID="14b41fff0edb1b597803adec7606dfaec3c0c061039b47d36a3eef0c3b6032e2"
	Oct 04 03:22:36 ha-481241 kubelet[761]: I1004 03:22:36.634863     761 scope.go:117] "RemoveContainer" containerID="569cc873f796d6c9cde7eed9a6d03de1dbd5c4ac2c103f967b60b300cc48fcca"
	Oct 04 03:22:36 ha-481241 kubelet[761]: E1004 03:22:36.687407     761 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012156686835341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:22:36 ha-481241 kubelet[761]: E1004 03:22:36.687447     761 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012156686835341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:22:45 ha-481241 kubelet[761]: E1004 03:22:45.458600     761 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481241?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 04 03:22:46 ha-481241 kubelet[761]: E1004 03:22:46.695426     761 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012166694196984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:22:46 ha-481241 kubelet[761]: E1004 03:22:46.695870     761 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012166694196984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:22:46 ha-481241 kubelet[761]: I1004 03:22:46.905924     761 scope.go:117] "RemoveContainer" containerID="d3c72fb2bf5a3385847b60254b2d2ebbe0001c1bb933cf2d3a475ac501eea273"
	Oct 04 03:22:55 ha-481241 kubelet[761]: E1004 03:22:55.459315     761 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481241?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 04 03:22:56 ha-481241 kubelet[761]: E1004 03:22:56.696855     761 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012176696675626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:22:56 ha-481241 kubelet[761]: E1004 03:22:56.696889     761 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012176696675626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:05 ha-481241 kubelet[761]: E1004 03:23:05.460124     761 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481241?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 04 03:23:06 ha-481241 kubelet[761]: E1004 03:23:06.698109     761 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012186697640420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:06 ha-481241 kubelet[761]: E1004 03:23:06.698153     761 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012186697640420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-481241 -n ha-481241
helpers_test.go:261: (dbg) Run:  kubectl --context ha-481241 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (126.50s)
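Diagnostic note (not output captured by the test run): the kubelet journal above shows kube-controller-manager-ha-481241 stuck in CrashLoopBackOff while requests to https://control-plane.minikube.internal:8443 are refused. A minimal follow-up sketch, assuming the ha-481241 profile is still running, would be to pull the crashed container's previous log and, if the API server itself is unreachable, inspect containers directly on the node. Pod and profile names are copied from the post-mortem output; the placeholder <container-id> comes from the crictl listing.

	# list control-plane pods and restart counts (names taken from the log above)
	kubectl --context ha-481241 -n kube-system get pods -o wide
	# fetch the previous (crashed) controller-manager log
	kubectl --context ha-481241 -n kube-system logs kube-controller-manager-ha-481241 --previous
	# if the API server is down, go through the node's container runtime instead
	out/minikube-linux-arm64 -p ha-481241 ssh -- sudo crictl ps -a
	out/minikube-linux-arm64 -p ha-481241 ssh -- sudo crictl logs <container-id>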

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (32.48s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-261592 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-261592 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.767631499s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-261592] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "pause-261592" primary control-plane node in "pause-261592" cluster
	* Pulling base image v0.0.45-1727731891-master ...
	* Updating the running docker "pause-261592" container ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-261592" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 03:47:40.982140  186480 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:47:40.982332  186480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:47:40.982345  186480 out.go:358] Setting ErrFile to fd 2...
	I1004 03:47:40.982351  186480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:47:40.982705  186480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 03:47:40.983078  186480 out.go:352] Setting JSON to false
	I1004 03:47:40.984062  186480 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5406,"bootTime":1728008255,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1004 03:47:40.984133  186480 start.go:139] virtualization:  
	I1004 03:47:40.987256  186480 out.go:177] * [pause-261592] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 03:47:40.990628  186480 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:47:40.990728  186480 notify.go:220] Checking for updates...
	I1004 03:47:40.995639  186480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:47:40.998289  186480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:47:41.000921  186480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	I1004 03:47:41.003461  186480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 03:47:41.006077  186480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:47:41.009080  186480 config.go:182] Loaded profile config "pause-261592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:47:41.009681  186480 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:47:41.036594  186480 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 03:47:41.036729  186480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:47:41.090052  186480 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-04 03:47:41.080028097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:47:41.090165  186480 docker.go:318] overlay module found
	I1004 03:47:41.092895  186480 out.go:177] * Using the docker driver based on existing profile
	I1004 03:47:41.095574  186480 start.go:297] selected driver: docker
	I1004 03:47:41.095590  186480 start.go:901] validating driver "docker" against &{Name:pause-261592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-261592 Namespace:default APIServerHAVIP: APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry
-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:47:41.095725  186480 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:47:41.095819  186480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:47:41.150615  186480 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-04 03:47:41.140374301 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:47:41.151102  186480 cni.go:84] Creating CNI manager for ""
	I1004 03:47:41.151154  186480 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 03:47:41.151201  186480 start.go:340] cluster config:
	{Name:pause-261592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-261592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage
-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:47:41.154133  186480 out.go:177] * Starting "pause-261592" primary control-plane node in "pause-261592" cluster
	I1004 03:47:41.156927  186480 cache.go:121] Beginning downloading kic base image for docker with crio
	I1004 03:47:41.159629  186480 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1004 03:47:41.162227  186480 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:47:41.162279  186480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1004 03:47:41.162290  186480 cache.go:56] Caching tarball of preloaded images
	I1004 03:47:41.162320  186480 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 03:47:41.162372  186480 preload.go:172] Found /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1004 03:47:41.162382  186480 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:47:41.162519  186480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/pause-261592/config.json ...
	I1004 03:47:41.181554  186480 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1004 03:47:41.181580  186480 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1004 03:47:41.181594  186480 cache.go:194] Successfully downloaded all kic artifacts
	I1004 03:47:41.181623  186480 start.go:360] acquireMachinesLock for pause-261592: {Name:mk2fe2b2bdb5607bf5cbca6da9d2ceed87ae3025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:47:41.181685  186480 start.go:364] duration metric: took 37.603µs to acquireMachinesLock for "pause-261592"
	I1004 03:47:41.181708  186480 start.go:96] Skipping create...Using existing machine configuration
	I1004 03:47:41.181717  186480 fix.go:54] fixHost starting: 
	I1004 03:47:41.182018  186480 cli_runner.go:164] Run: docker container inspect pause-261592 --format={{.State.Status}}
	I1004 03:47:41.198527  186480 fix.go:112] recreateIfNeeded on pause-261592: state=Running err=<nil>
	W1004 03:47:41.198575  186480 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 03:47:41.201585  186480 out.go:177] * Updating the running docker "pause-261592" container ...
	I1004 03:47:41.204301  186480 machine.go:93] provisionDockerMachine start ...
	I1004 03:47:41.204423  186480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-261592
	I1004 03:47:41.221157  186480 main.go:141] libmachine: Using SSH client type: native
	I1004 03:47:41.221463  186480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1004 03:47:41.221481  186480 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 03:47:41.357337  186480 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-261592
	
	I1004 03:47:41.357369  186480 ubuntu.go:169] provisioning hostname "pause-261592"
	I1004 03:47:41.357444  186480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-261592
	I1004 03:47:41.377381  186480 main.go:141] libmachine: Using SSH client type: native
	I1004 03:47:41.377705  186480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1004 03:47:41.377721  186480 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-261592 && echo "pause-261592" | sudo tee /etc/hostname
	I1004 03:47:41.526851  186480 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-261592
	
	I1004 03:47:41.526943  186480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-261592
	I1004 03:47:41.544662  186480 main.go:141] libmachine: Using SSH client type: native
	I1004 03:47:41.545548  186480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1004 03:47:41.545582  186480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-261592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-261592/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-261592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:47:41.681146  186480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:47:41.681174  186480 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19546-2238/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-2238/.minikube}
	I1004 03:47:41.681251  186480 ubuntu.go:177] setting up certificates
	I1004 03:47:41.681267  186480 provision.go:84] configureAuth start
	I1004 03:47:41.681335  186480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-261592
	I1004 03:47:41.698672  186480 provision.go:143] copyHostCerts
	I1004 03:47:41.698744  186480 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem, removing ...
	I1004 03:47:41.698766  186480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem
	I1004 03:47:41.698843  186480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem (1082 bytes)
	I1004 03:47:41.698984  186480 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem, removing ...
	I1004 03:47:41.698995  186480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem
	I1004 03:47:41.699024  186480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem (1123 bytes)
	I1004 03:47:41.699085  186480 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem, removing ...
	I1004 03:47:41.699093  186480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem
	I1004 03:47:41.699119  186480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem (1679 bytes)
	I1004 03:47:41.699169  186480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem org=jenkins.pause-261592 san=[127.0.0.1 192.168.85.2 localhost minikube pause-261592]
	I1004 03:47:42.065609  186480 provision.go:177] copyRemoteCerts
	I1004 03:47:42.065689  186480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:47:42.065729  186480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-261592
	I1004 03:47:42.082104  186480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/pause-261592/id_rsa Username:docker}
	I1004 03:47:42.183303  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:47:42.211835  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1004 03:47:42.239248  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:47:42.266277  186480 provision.go:87] duration metric: took 584.995955ms to configureAuth
	I1004 03:47:42.266307  186480 ubuntu.go:193] setting minikube options for container-runtime
	I1004 03:47:42.266599  186480 config.go:182] Loaded profile config "pause-261592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:47:42.266750  186480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-261592
	I1004 03:47:42.290953  186480 main.go:141] libmachine: Using SSH client type: native
	I1004 03:47:42.294165  186480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1004 03:47:42.294208  186480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:47:47.723278  186480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:47:47.723298  186480 machine.go:96] duration metric: took 6.518980788s to provisionDockerMachine
	I1004 03:47:47.723309  186480 start.go:293] postStartSetup for "pause-261592" (driver="docker")
	I1004 03:47:47.723321  186480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:47:47.723395  186480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:47:47.723443  186480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-261592
	I1004 03:47:47.745422  186480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/pause-261592/id_rsa Username:docker}
	I1004 03:47:47.842507  186480 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:47:47.846285  186480 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1004 03:47:47.846323  186480 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1004 03:47:47.846338  186480 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1004 03:47:47.846350  186480 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1004 03:47:47.846365  186480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/addons for local assets ...
	I1004 03:47:47.846421  186480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/files for local assets ...
	I1004 03:47:47.846506  186480 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem -> 75602.pem in /etc/ssl/certs
	I1004 03:47:47.846609  186480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:47:47.854947  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem --> /etc/ssl/certs/75602.pem (1708 bytes)
	I1004 03:47:47.880646  186480 start.go:296] duration metric: took 157.32134ms for postStartSetup
	I1004 03:47:47.880745  186480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:47:47.880790  186480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-261592
	I1004 03:47:47.899625  186480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/pause-261592/id_rsa Username:docker}
	I1004 03:47:47.994668  186480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1004 03:47:47.999947  186480 fix.go:56] duration metric: took 6.818223482s for fixHost
	I1004 03:47:47.999970  186480 start.go:83] releasing machines lock for "pause-261592", held for 6.81827317s
	I1004 03:47:48.000059  186480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-261592
	I1004 03:47:48.019012  186480 ssh_runner.go:195] Run: cat /version.json
	I1004 03:47:48.019072  186480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-261592
	I1004 03:47:48.019353  186480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:47:48.019402  186480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-261592
	I1004 03:47:48.045748  186480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/pause-261592/id_rsa Username:docker}
	I1004 03:47:48.054687  186480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/pause-261592/id_rsa Username:docker}
	I1004 03:47:48.278695  186480 ssh_runner.go:195] Run: systemctl --version
	I1004 03:47:48.284004  186480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:47:48.442381  186480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 03:47:48.447017  186480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:47:48.456482  186480 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1004 03:47:48.456559  186480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:47:48.466083  186480 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 03:47:48.466109  186480 start.go:495] detecting cgroup driver to use...
	I1004 03:47:48.466141  186480 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1004 03:47:48.466199  186480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:47:48.479676  186480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:47:48.492700  186480 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:47:48.492772  186480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:47:48.507151  186480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:47:48.520197  186480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:47:48.671788  186480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:47:48.822401  186480 docker.go:233] disabling docker service ...
	I1004 03:47:48.822480  186480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:47:48.837464  186480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:47:48.850558  186480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:47:48.997092  186480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:47:49.156061  186480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:47:49.170774  186480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:47:49.198698  186480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:47:49.198828  186480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:47:49.209533  186480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:47:49.209645  186480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:47:49.220189  186480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:47:49.230909  186480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:47:49.241725  186480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:47:49.251708  186480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:47:49.263129  186480 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:47:49.273355  186480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:47:49.283774  186480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:47:49.293566  186480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:47:49.303099  186480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:47:49.452835  186480 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:47:49.918806  186480 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:47:49.918923  186480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:47:49.922898  186480 start.go:563] Will wait 60s for crictl version
	I1004 03:47:49.923011  186480 ssh_runner.go:195] Run: which crictl
	I1004 03:47:49.926642  186480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:47:49.978311  186480 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1004 03:47:49.978495  186480 ssh_runner.go:195] Run: crio --version
	I1004 03:47:50.040387  186480 ssh_runner.go:195] Run: crio --version
	I1004 03:47:50.109939  186480 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1004 03:47:50.112880  186480 cli_runner.go:164] Run: docker network inspect pause-261592 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 03:47:50.129517  186480 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1004 03:47:50.135187  186480 kubeadm.go:883] updating cluster {Name:pause-261592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-261592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false stor
age-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 03:47:50.135324  186480 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:47:50.135378  186480 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:47:50.208368  186480 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:47:50.208390  186480 crio.go:433] Images already preloaded, skipping extraction
	I1004 03:47:50.208448  186480 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:47:50.261563  186480 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:47:50.261582  186480 cache_images.go:84] Images are preloaded, skipping loading
	I1004 03:47:50.261590  186480 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 crio true true} ...
	I1004 03:47:50.261700  186480 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-261592 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-261592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:47:50.261781  186480 ssh_runner.go:195] Run: crio config
	I1004 03:47:50.331529  186480 cni.go:84] Creating CNI manager for ""
	I1004 03:47:50.331548  186480 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 03:47:50.331558  186480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 03:47:50.331584  186480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-261592 NodeName:pause-261592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 03:47:50.331732  186480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-261592"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 03:47:50.331800  186480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:47:50.342776  186480 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:47:50.342848  186480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 03:47:50.352799  186480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1004 03:47:50.372930  186480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:47:50.393036  186480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1004 03:47:50.412827  186480 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1004 03:47:50.417114  186480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:47:50.580467  186480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:47:50.593162  186480 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/pause-261592 for IP: 192.168.85.2
	I1004 03:47:50.593308  186480 certs.go:194] generating shared ca certs ...
	I1004 03:47:50.593336  186480 certs.go:226] acquiring lock for ca certs: {Name:mk468b07ab6620fd74cefc3667e1a8643008ce5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:47:50.593506  186480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key
	I1004 03:47:50.593590  186480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key
	I1004 03:47:50.593606  186480 certs.go:256] generating profile certs ...
	I1004 03:47:50.593711  186480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/pause-261592/client.key
	I1004 03:47:50.593819  186480 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/pause-261592/apiserver.key.540ee3dd
	I1004 03:47:50.593886  186480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/pause-261592/proxy-client.key
	I1004 03:47:50.594020  186480 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem (1338 bytes)
	W1004 03:47:50.594068  186480 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560_empty.pem, impossibly tiny 0 bytes
	I1004 03:47:50.594083  186480 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:47:50.594110  186480 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:47:50.594153  186480 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:47:50.594183  186480 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem (1679 bytes)
	I1004 03:47:50.594244  186480 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem (1708 bytes)
	I1004 03:47:50.594942  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:47:50.624854  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 03:47:50.654284  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:47:50.681925  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 03:47:50.707103  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/pause-261592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1004 03:47:50.732395  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/pause-261592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 03:47:50.758473  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/pause-261592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:47:50.824557  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/pause-261592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:47:50.923045  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem --> /usr/share/ca-certificates/7560.pem (1338 bytes)
	I1004 03:47:50.962392  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem --> /usr/share/ca-certificates/75602.pem (1708 bytes)
	I1004 03:47:51.062070  186480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:47:51.148189  186480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 03:47:51.199014  186480 ssh_runner.go:195] Run: openssl version
	I1004 03:47:51.238153  186480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75602.pem && ln -fs /usr/share/ca-certificates/75602.pem /etc/ssl/certs/75602.pem"
	I1004 03:47:51.269141  186480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75602.pem
	I1004 03:47:51.280828  186480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/75602.pem
	I1004 03:47:51.280966  186480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75602.pem
	I1004 03:47:51.295290  186480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75602.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:47:51.308359  186480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:47:51.323026  186480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:47:51.333102  186480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:47:51.333329  186480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:47:51.353500  186480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:47:51.365557  186480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7560.pem && ln -fs /usr/share/ca-certificates/7560.pem /etc/ssl/certs/7560.pem"
	I1004 03:47:51.394909  186480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7560.pem
	I1004 03:47:51.410541  186480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/7560.pem
	I1004 03:47:51.410676  186480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7560.pem
	I1004 03:47:51.429009  186480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7560.pem /etc/ssl/certs/51391683.0"
	I1004 03:47:51.469132  186480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:47:51.482621  186480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 03:47:51.497939  186480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 03:47:51.522466  186480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 03:47:51.548200  186480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 03:47:51.576140  186480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 03:47:51.604006  186480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 03:47:51.617452  186480 kubeadm.go:392] StartCluster: {Name:pause-261592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-261592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage
-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:47:51.617568  186480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 03:47:51.617665  186480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 03:47:51.697800  186480 cri.go:89] found id: "6e45908eccf2425f42fda0e757ce218c98526882f85c4417b7e5dd06150804d4"
	I1004 03:47:51.697824  186480 cri.go:89] found id: "16f000d572a96304522fbd46b6ebb85bf3fe6123a4f86bf24eeb844063bf5a71"
	I1004 03:47:51.697829  186480 cri.go:89] found id: "1bca1e623975c87aca561b3275a5bc55990583c7a23ab4d23b7595824e768c0a"
	I1004 03:47:51.697833  186480 cri.go:89] found id: "02461d0ecefa6fc0843c5040172ef8feae2e780ed9db3d77adfa73f1de49e8b5"
	I1004 03:47:51.697836  186480 cri.go:89] found id: "ba30065a5829c70a38c30a7b011095b86cf35e7e644f2c47c8f58dd95b27ed2f"
	I1004 03:47:51.697841  186480 cri.go:89] found id: "6e2a5d0b9ab3eb5a96949e350a2f0eec6b4d45fd54384d9fdf9fb6b033044085"
	I1004 03:47:51.697871  186480 cri.go:89] found id: "fb198c02c4ed24f961fb6303226c64528c187c0f217f6e982bc160b51f2db2e1"
	I1004 03:47:51.697882  186480 cri.go:89] found id: "5c89115e27ff1803f2493901481a19373c763d1e1cf1b45b61c9f244f35a1f17"
	I1004 03:47:51.697886  186480 cri.go:89] found id: "20c26555e198ebf41e6877314f76c6c3ec980e2db646eddbd5241397bbd47b93"
	I1004 03:47:51.697894  186480 cri.go:89] found id: "c0e2e70ad4035836e7818c915e0db86d4485f6ff6afabe892db8d1e93822e1ea"
	I1004 03:47:51.697904  186480 cri.go:89] found id: "ffb8dcb7cdf70e6e7e692b2a1a724f48acf77856b5fc252be5e579d9316c71b8"
	I1004 03:47:51.697907  186480 cri.go:89] found id: "62740614906c4108c264aa4ee766e9fff025c5ef0762e785b5a44f65ec991081"
	I1004 03:47:51.697910  186480 cri.go:89] found id: "791224c1c5beaa48a02dd37c47d161a52607518ce5271805ba2b539a363603d9"
	I1004 03:47:51.697914  186480 cri.go:89] found id: "d43427b82f7def3fbcbe40e387ba39f632412a2e72ed6e718610483c1cbff0ce"
	I1004 03:47:51.697918  186480 cri.go:89] found id: "0f52739bcc35ba86f784c063cd2903825f9bafc686012ce3c188937f35f5bb1d"
	I1004 03:47:51.697921  186480 cri.go:89] found id: ""
	I1004 03:47:51.697982  186480 ssh_runner.go:195] Run: sudo runc list -f json
** /stderr **
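For reference, the container IDs listed just above come from CRI-O. A minimal sketch of reproducing that listing by hand, assuming the out/minikube-linux-arm64 binary and the pause-261592 profile from this run are still available:

    out/minikube-linux-arm64 -p pause-261592 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

This is the same crictl invocation the test issues over ssh_runner, so the IDs it prints should match the ones recorded in the log.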
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-261592
helpers_test.go:235: (dbg) docker inspect pause-261592:
-- stdout --
	[
	    {
	        "Id": "7d29fdc6e6055eb05112373ebc77fff9c70d5d6aa5c4911f3eba0e4c82eb1ac5",
	        "Created": "2024-10-04T03:46:24.601096244Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-04T03:46:24.816583432Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/7d29fdc6e6055eb05112373ebc77fff9c70d5d6aa5c4911f3eba0e4c82eb1ac5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d29fdc6e6055eb05112373ebc77fff9c70d5d6aa5c4911f3eba0e4c82eb1ac5/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d29fdc6e6055eb05112373ebc77fff9c70d5d6aa5c4911f3eba0e4c82eb1ac5/hosts",
	        "LogPath": "/var/lib/docker/containers/7d29fdc6e6055eb05112373ebc77fff9c70d5d6aa5c4911f3eba0e4c82eb1ac5/7d29fdc6e6055eb05112373ebc77fff9c70d5d6aa5c4911f3eba0e4c82eb1ac5-json.log",
	        "Name": "/pause-261592",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-261592:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-261592",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a1effe3ed0b396717f632fce8d2c5360ecacac03af23b1791dbe7a27100f0585-init/diff:/var/lib/docker/overlay2/113409e5ac8a20e78db05ebf8d2720874d391240a7f47648e5e10a2a0c89288f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a1effe3ed0b396717f632fce8d2c5360ecacac03af23b1791dbe7a27100f0585/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a1effe3ed0b396717f632fce8d2c5360ecacac03af23b1791dbe7a27100f0585/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a1effe3ed0b396717f632fce8d2c5360ecacac03af23b1791dbe7a27100f0585/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-261592",
	                "Source": "/var/lib/docker/volumes/pause-261592/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-261592",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-261592",
	                "name.minikube.sigs.k8s.io": "pause-261592",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e88b121bf20e8ab90009c2a0462f4fab2693d40d230d0fc17bba1a3df6eb2af",
	            "SandboxKey": "/var/run/docker/netns/8e88b121bf20",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-261592": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "791985b9a1c9615b77e486ac6441ab7cce98e79371b4be0d691b7e1e70cb02f4",
	                    "EndpointID": "08647013cbbafbc535742849826d771b05dd8715008398f52eaaf067bef6f716",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-261592",
	                        "7d29fdc6e605"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
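The forwarded host ports shown in the NetworkSettings section above (for example 33026 for the API server's 8443/tcp) can be read directly with a Go template instead of scanning the full JSON. A minimal sketch, using only fields present in the inspect output above:

    docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-261592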
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-261592 -n pause-261592
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-261592 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-261592 logs -n 25: (2.584613066s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p missing-upgrade-014414      | minikube                  | jenkins | v1.26.0 | 04 Oct 24 03:40 UTC | 04 Oct 24 03:42 UTC |
	|         | --memory=2200 --driver=docker  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-324508         | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:41 UTC | 04 Oct 24 03:41 UTC |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-324508         | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:41 UTC | 04 Oct 24 03:41 UTC |
	| start   | -p NoKubernetes-324508         | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:41 UTC | 04 Oct 24 03:41 UTC |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-324508 sudo    | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:41 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-324508         | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:42 UTC | 04 Oct 24 03:42 UTC |
	| start   | -p NoKubernetes-324508         | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:42 UTC | 04 Oct 24 03:42 UTC |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-324508 sudo    | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:42 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-324508         | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:42 UTC | 04 Oct 24 03:42 UTC |
	| start   | -p kubernetes-upgrade-904287   | kubernetes-upgrade-904287 | jenkins | v1.34.0 | 04 Oct 24 03:42 UTC | 04 Oct 24 03:43 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-014414      | missing-upgrade-014414    | jenkins | v1.34.0 | 04 Oct 24 03:42 UTC | 04 Oct 24 03:43 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-904287   | kubernetes-upgrade-904287 | jenkins | v1.34.0 | 04 Oct 24 03:43 UTC | 04 Oct 24 03:43 UTC |
	| start   | -p kubernetes-upgrade-904287   | kubernetes-upgrade-904287 | jenkins | v1.34.0 | 04 Oct 24 03:43 UTC | 04 Oct 24 03:48 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-014414      | missing-upgrade-014414    | jenkins | v1.34.0 | 04 Oct 24 03:43 UTC | 04 Oct 24 03:43 UTC |
	| start   | -p stopped-upgrade-917470      | minikube                  | jenkins | v1.26.0 | 04 Oct 24 03:43 UTC | 04 Oct 24 03:44 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=docker             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-917470 stop    | minikube                  | jenkins | v1.26.0 | 04 Oct 24 03:44 UTC | 04 Oct 24 03:44 UTC |
	| start   | -p stopped-upgrade-917470      | stopped-upgrade-917470    | jenkins | v1.34.0 | 04 Oct 24 03:44 UTC | 04 Oct 24 03:44 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-917470      | stopped-upgrade-917470    | jenkins | v1.34.0 | 04 Oct 24 03:44 UTC | 04 Oct 24 03:44 UTC |
	| start   | -p running-upgrade-505617      | minikube                  | jenkins | v1.26.0 | 04 Oct 24 03:44 UTC | 04 Oct 24 03:45 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=docker             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-505617      | running-upgrade-505617    | jenkins | v1.34.0 | 04 Oct 24 03:45 UTC | 04 Oct 24 03:46 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-505617      | running-upgrade-505617    | jenkins | v1.34.0 | 04 Oct 24 03:46 UTC | 04 Oct 24 03:46 UTC |
	| start   | -p pause-261592 --memory=2048  | pause-261592              | jenkins | v1.34.0 | 04 Oct 24 03:46 UTC | 04 Oct 24 03:47 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-261592                | pause-261592              | jenkins | v1.34.0 | 04 Oct 24 03:47 UTC | 04 Oct 24 03:48 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-904287   | kubernetes-upgrade-904287 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-904287   | kubernetes-upgrade-904287 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 03:48:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 03:48:04.672636  188340 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:48:04.672769  188340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:48:04.672779  188340 out.go:358] Setting ErrFile to fd 2...
	I1004 03:48:04.672785  188340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:48:04.673025  188340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 03:48:04.673437  188340 out.go:352] Setting JSON to false
	I1004 03:48:04.674382  188340 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5430,"bootTime":1728008255,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1004 03:48:04.674462  188340 start.go:139] virtualization:  
	I1004 03:48:04.677606  188340 out.go:177] * [kubernetes-upgrade-904287] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 03:48:04.681010  188340 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:48:04.681075  188340 notify.go:220] Checking for updates...
	I1004 03:48:04.686496  188340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:48:04.689015  188340 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:48:04.691539  188340 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	I1004 03:48:04.694149  188340 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 03:48:04.696728  188340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:48:04.699972  188340 config.go:182] Loaded profile config "kubernetes-upgrade-904287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:48:04.700867  188340 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:48:04.730249  188340 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 03:48:04.730367  188340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:48:04.787957  188340 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-04 03:48:04.778060518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:48:04.788069  188340 docker.go:318] overlay module found
	I1004 03:48:04.790979  188340 out.go:177] * Using the docker driver based on existing profile
	I1004 03:48:04.793591  188340 start.go:297] selected driver: docker
	I1004 03:48:04.793613  188340 start.go:901] validating driver "docker" against &{Name:kubernetes-upgrade-904287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-904287 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:48:04.793748  188340 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:48:04.794386  188340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:48:04.844752  188340 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-04 03:48:04.834438127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:48:04.845136  188340 cni.go:84] Creating CNI manager for ""
	I1004 03:48:04.845190  188340 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 03:48:04.845378  188340 start.go:340] cluster config:
	{Name:kubernetes-upgrade-904287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-904287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:48:04.850024  188340 out.go:177] * Starting "kubernetes-upgrade-904287" primary control-plane node in "kubernetes-upgrade-904287" cluster
	I1004 03:48:04.853124  188340 cache.go:121] Beginning downloading kic base image for docker with crio
	I1004 03:48:04.856019  188340 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1004 03:48:04.858579  188340 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 03:48:04.858522  188340 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:48:04.858668  188340 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1004 03:48:04.858681  188340 cache.go:56] Caching tarball of preloaded images
	I1004 03:48:04.858788  188340 preload.go:172] Found /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1004 03:48:04.858797  188340 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:48:04.858899  188340 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/kubernetes-upgrade-904287/config.json ...
	I1004 03:48:04.879277  188340 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1004 03:48:04.879296  188340 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1004 03:48:04.879318  188340 cache.go:194] Successfully downloaded all kic artifacts
	I1004 03:48:04.879346  188340 start.go:360] acquireMachinesLock for kubernetes-upgrade-904287: {Name:mkd3caf1b1dafbdc83a0b9efd07903cf90ba4f07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:48:04.879400  188340 start.go:364] duration metric: took 33.829µs to acquireMachinesLock for "kubernetes-upgrade-904287"
	I1004 03:48:04.879419  188340 start.go:96] Skipping create...Using existing machine configuration
	I1004 03:48:04.879425  188340 fix.go:54] fixHost starting: 
	I1004 03:48:04.879735  188340 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-904287 --format={{.State.Status}}
	I1004 03:48:04.897322  188340 fix.go:112] recreateIfNeeded on kubernetes-upgrade-904287: state=Running err=<nil>
	W1004 03:48:04.897350  188340 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 03:48:04.900237  188340 out.go:177] * Updating the running docker "kubernetes-upgrade-904287" container ...
	I1004 03:48:01.165837  186480 node_ready.go:49] node "pause-261592" has status "Ready":"True"
	I1004 03:48:01.165861  186480 node_ready.go:38] duration metric: took 8.899138685s for node "pause-261592" to be "Ready" ...
	I1004 03:48:01.165870  186480 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:48:01.165911  186480 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1004 03:48:01.165923  186480 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1004 03:48:01.293984  186480 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-42rv6" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.357620  186480 pod_ready.go:93] pod "coredns-7c65d6cfc9-42rv6" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:01.357688  186480 pod_ready.go:82] duration metric: took 63.618359ms for pod "coredns-7c65d6cfc9-42rv6" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.357716  186480 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9n4vl" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.388354  186480 pod_ready.go:93] pod "coredns-7c65d6cfc9-9n4vl" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:01.388432  186480 pod_ready.go:82] duration metric: took 30.687433ms for pod "coredns-7c65d6cfc9-9n4vl" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.388468  186480 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.408413  186480 pod_ready.go:93] pod "etcd-pause-261592" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:01.408485  186480 pod_ready.go:82] duration metric: took 19.986252ms for pod "etcd-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.408517  186480 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.426612  186480 pod_ready.go:93] pod "kube-apiserver-pause-261592" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:01.426687  186480 pod_ready.go:82] duration metric: took 18.14772ms for pod "kube-apiserver-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.426718  186480 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:03.443834  186480 pod_ready.go:103] pod "kube-controller-manager-pause-261592" in "kube-system" namespace has status "Ready":"False"
	I1004 03:48:05.451826  186480 pod_ready.go:103] pod "kube-controller-manager-pause-261592" in "kube-system" namespace has status "Ready":"False"
	I1004 03:48:05.934265  186480 pod_ready.go:93] pod "kube-controller-manager-pause-261592" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:05.934288  186480 pod_ready.go:82] duration metric: took 4.507549283s for pod "kube-controller-manager-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:05.934301  186480 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k84f2" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:05.941889  186480 pod_ready.go:93] pod "kube-proxy-k84f2" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:05.941910  186480 pod_ready.go:82] duration metric: took 7.601752ms for pod "kube-proxy-k84f2" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:05.941921  186480 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:05.949683  186480 pod_ready.go:93] pod "kube-scheduler-pause-261592" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:05.949759  186480 pod_ready.go:82] duration metric: took 7.818788ms for pod "kube-scheduler-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:05.949784  186480 pod_ready.go:39] duration metric: took 4.783901705s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:48:05.949828  186480 api_server.go:52] waiting for apiserver process to appear ...
	I1004 03:48:05.949923  186480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:48:05.972446  186480 api_server.go:72] duration metric: took 14.132740139s to wait for apiserver process to appear ...
	I1004 03:48:05.972521  186480 api_server.go:88] waiting for apiserver healthz status ...
	I1004 03:48:05.972558  186480 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1004 03:48:05.988185  186480 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1004 03:48:05.990310  186480 api_server.go:141] control plane version: v1.31.1
	I1004 03:48:05.990388  186480 api_server.go:131] duration metric: took 17.844722ms to wait for apiserver health ...
	I1004 03:48:05.990418  186480 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 03:48:05.998933  186480 system_pods.go:59] 8 kube-system pods found
	I1004 03:48:05.999020  186480 system_pods.go:61] "coredns-7c65d6cfc9-42rv6" [9c0b7172-82ef-42e6-bf7e-126917a5f027] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 03:48:05.999045  186480 system_pods.go:61] "coredns-7c65d6cfc9-9n4vl" [3c8258a1-0c38-4c03-8d36-ee9b2606feb9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 03:48:05.999084  186480 system_pods.go:61] "etcd-pause-261592" [2e607ad6-bed2-441f-a635-b3a7fcdb6127] Running
	I1004 03:48:05.999121  186480 system_pods.go:61] "kindnet-srv54" [a063f599-caec-4865-9852-66e0765f7359] Running
	I1004 03:48:05.999146  186480 system_pods.go:61] "kube-apiserver-pause-261592" [8f59c812-b539-431b-8f39-08013081ddbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 03:48:05.999184  186480 system_pods.go:61] "kube-controller-manager-pause-261592" [ce2f6ad2-7642-484e-be9c-9181227ed799] Running
	I1004 03:48:05.999207  186480 system_pods.go:61] "kube-proxy-k84f2" [7c42b79f-7f6b-4035-a550-f5c278021ea2] Running
	I1004 03:48:05.999244  186480 system_pods.go:61] "kube-scheduler-pause-261592" [b84c9d60-d1f3-4466-ae61-f001cff778b4] Running
	I1004 03:48:05.999270  186480 system_pods.go:74] duration metric: took 8.824677ms to wait for pod list to return data ...
	I1004 03:48:05.999293  186480 default_sa.go:34] waiting for default service account to be created ...
	I1004 03:48:06.170316  186480 default_sa.go:45] found service account: "default"
	I1004 03:48:06.170390  186480 default_sa.go:55] duration metric: took 171.061748ms for default service account to be created ...
	I1004 03:48:06.170415  186480 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 03:48:06.374023  186480 system_pods.go:86] 8 kube-system pods found
	I1004 03:48:06.374104  186480 system_pods.go:89] "coredns-7c65d6cfc9-42rv6" [9c0b7172-82ef-42e6-bf7e-126917a5f027] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 03:48:06.374131  186480 system_pods.go:89] "coredns-7c65d6cfc9-9n4vl" [3c8258a1-0c38-4c03-8d36-ee9b2606feb9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 03:48:06.374169  186480 system_pods.go:89] "etcd-pause-261592" [2e607ad6-bed2-441f-a635-b3a7fcdb6127] Running
	I1004 03:48:06.374194  186480 system_pods.go:89] "kindnet-srv54" [a063f599-caec-4865-9852-66e0765f7359] Running
	I1004 03:48:06.374219  186480 system_pods.go:89] "kube-apiserver-pause-261592" [8f59c812-b539-431b-8f39-08013081ddbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 03:48:06.374255  186480 system_pods.go:89] "kube-controller-manager-pause-261592" [ce2f6ad2-7642-484e-be9c-9181227ed799] Running
	I1004 03:48:06.374279  186480 system_pods.go:89] "kube-proxy-k84f2" [7c42b79f-7f6b-4035-a550-f5c278021ea2] Running
	I1004 03:48:06.374298  186480 system_pods.go:89] "kube-scheduler-pause-261592" [b84c9d60-d1f3-4466-ae61-f001cff778b4] Running
	I1004 03:48:06.374335  186480 system_pods.go:126] duration metric: took 203.900239ms to wait for k8s-apps to be running ...
	I1004 03:48:06.374362  186480 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 03:48:06.374451  186480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:48:06.389790  186480 system_svc.go:56] duration metric: took 15.419169ms WaitForService to wait for kubelet
	I1004 03:48:06.389816  186480 kubeadm.go:582] duration metric: took 14.550115674s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:48:06.389834  186480 node_conditions.go:102] verifying NodePressure condition ...
	I1004 03:48:06.570304  186480 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1004 03:48:06.570333  186480 node_conditions.go:123] node cpu capacity is 2
	I1004 03:48:06.570345  186480 node_conditions.go:105] duration metric: took 180.505493ms to run NodePressure ...
	I1004 03:48:06.570357  186480 start.go:241] waiting for startup goroutines ...
	I1004 03:48:06.570365  186480 start.go:246] waiting for cluster config update ...
	I1004 03:48:06.570372  186480 start.go:255] writing updated cluster config ...
	I1004 03:48:06.570701  186480 ssh_runner.go:195] Run: rm -f paused
	I1004 03:48:06.671683  186480 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 03:48:06.676903  186480 out.go:177] * Done! kubectl is now configured to use "pause-261592" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.449919725Z" level=info msg="Started container" PID=2619 containerID=6e2a5d0b9ab3eb5a96949e350a2f0eec6b4d45fd54384d9fdf9fb6b033044085 description=kube-system/kube-controller-manager-pause-261592/kube-controller-manager id=771738e9-92d2-49fc-b767-e364d91dc6a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5015fc0f756f65f64c10f8ac76db6b5d66c8011d5e10b8cf9c1369465196bdd6
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.540745017Z" level=info msg="Created container 16f000d572a96304522fbd46b6ebb85bf3fe6123a4f86bf24eeb844063bf5a71: kube-system/kube-apiserver-pause-261592/kube-apiserver" id=c6ecbf4c-b9ff-4b9b-9e95-9b612a4c7568 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.541638803Z" level=info msg="Starting container: 16f000d572a96304522fbd46b6ebb85bf3fe6123a4f86bf24eeb844063bf5a71" id=b9bd6d8a-8d8b-40c2-bfc5-1b7e28a94543 name=/runtime.v1.RuntimeService/StartContainer
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.549551470Z" level=info msg="Created container fb198c02c4ed24f961fb6303226c64528c187c0f217f6e982bc160b51f2db2e1: kube-system/kube-scheduler-pause-261592/kube-scheduler" id=3953d026-49f0-4c44-836b-6b9c4a6acefc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.550200626Z" level=info msg="Starting container: fb198c02c4ed24f961fb6303226c64528c187c0f217f6e982bc160b51f2db2e1" id=b7e84d80-fb66-4a3e-9ef8-03ed63b32095 name=/runtime.v1.RuntimeService/StartContainer
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.555472216Z" level=info msg="Created container 6e45908eccf2425f42fda0e757ce218c98526882f85c4417b7e5dd06150804d4: kube-system/coredns-7c65d6cfc9-42rv6/coredns" id=1e3bef98-cb67-4941-969f-178a70962f45 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.556158228Z" level=info msg="Starting container: 6e45908eccf2425f42fda0e757ce218c98526882f85c4417b7e5dd06150804d4" id=1906e354-fcc1-4ff0-92dc-ef65de86ae0a name=/runtime.v1.RuntimeService/StartContainer
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.569749456Z" level=info msg="Started container" PID=2714 containerID=16f000d572a96304522fbd46b6ebb85bf3fe6123a4f86bf24eeb844063bf5a71 description=kube-system/kube-apiserver-pause-261592/kube-apiserver id=b9bd6d8a-8d8b-40c2-bfc5-1b7e28a94543 name=/runtime.v1.RuntimeService/StartContainer sandboxID=75312aca7f362ccfeb58e52c61f54ca5e5c166f153dc9f29163c90ddc460e347
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.574394592Z" level=info msg="Started container" PID=2641 containerID=fb198c02c4ed24f961fb6303226c64528c187c0f217f6e982bc160b51f2db2e1 description=kube-system/kube-scheduler-pause-261592/kube-scheduler id=b7e84d80-fb66-4a3e-9ef8-03ed63b32095 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d9b6da3b39af1b4419516eb76cf4c25d86d5723f9abfd18fc27c8ddff8b5e55
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.606284493Z" level=info msg="Started container" PID=2750 containerID=6e45908eccf2425f42fda0e757ce218c98526882f85c4417b7e5dd06150804d4 description=kube-system/coredns-7c65d6cfc9-42rv6/coredns id=1906e354-fcc1-4ff0-92dc-ef65de86ae0a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c79a618d02624fd4d66f31b14489fb4b8ee1cad6bfa65253c5094ea36627a59
	Oct 04 03:47:52 pause-261592 crio[2430]: time="2024-10-04 03:47:52.022221885Z" level=info msg="Created container e63018b158a7bd670279f3703aa2d851093804185d342910399a1086f42f07f5: kube-system/kube-proxy-k84f2/kube-proxy" id=686324ba-d342-479b-92c1-5d8cdc0cdb2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 04 03:47:52 pause-261592 crio[2430]: time="2024-10-04 03:47:52.023497470Z" level=info msg="Starting container: e63018b158a7bd670279f3703aa2d851093804185d342910399a1086f42f07f5" id=83943def-4d88-4cfb-9ecc-e07961a5fc9f name=/runtime.v1.RuntimeService/StartContainer
	Oct 04 03:47:52 pause-261592 crio[2430]: time="2024-10-04 03:47:52.278812962Z" level=info msg="Started container" PID=2720 containerID=e63018b158a7bd670279f3703aa2d851093804185d342910399a1086f42f07f5 description=kube-system/kube-proxy-k84f2/kube-proxy id=83943def-4d88-4cfb-9ecc-e07961a5fc9f name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f9f23612631fecc50dc6c548f839c9511cc9c19a2bb2d72a0d4c94e72a98301
	Oct 04 03:48:01 pause-261592 crio[2430]: time="2024-10-04 03:48:01.974054571Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 04 03:48:01 pause-261592 crio[2430]: time="2024-10-04 03:48:01.993352151Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 04 03:48:01 pause-261592 crio[2430]: time="2024-10-04 03:48:01.993385316Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 04 03:48:01 pause-261592 crio[2430]: time="2024-10-04 03:48:01.993401077Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.011399189Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.011433420Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.011449321Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.030440391Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.030477461Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.030494355Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.049180449Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.049471239Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6e45908eccf24       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   17 seconds ago       Running             coredns                   1                   5c79a618d0262       coredns-7c65d6cfc9-42rv6
	16f000d572a96       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   17 seconds ago       Running             kube-apiserver            1                   75312aca7f362       kube-apiserver-pause-261592
	e63018b158a7b       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   17 seconds ago       Running             kube-proxy                1                   2f9f23612631f       kube-proxy-k84f2
	1bca1e623975c       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   17 seconds ago       Running             kindnet-cni               1                   79aac73c73747       kindnet-srv54
	02461d0ecefa6       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   17 seconds ago       Running             etcd                      1                   087d654c5df5d       etcd-pause-261592
	ba30065a5829c       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   17 seconds ago       Running             coredns                   1                   b2eb1f203942d       coredns-7c65d6cfc9-9n4vl
	6e2a5d0b9ab3e       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   17 seconds ago       Running             kube-controller-manager   1                   5015fc0f756f6       kube-controller-manager-pause-261592
	fb198c02c4ed2       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   17 seconds ago       Running             kube-scheduler            1                   1d9b6da3b39af       kube-scheduler-pause-261592
	5c89115e27ff1       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   30 seconds ago       Exited              coredns                   0                   5c79a618d0262       coredns-7c65d6cfc9-42rv6
	20c26555e198e       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   30 seconds ago       Exited              coredns                   0                   b2eb1f203942d       coredns-7c65d6cfc9-9n4vl
	c0e2e70ad4035       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   About a minute ago   Exited              kube-proxy                0                   2f9f23612631f       kube-proxy-k84f2
	ffb8dcb7cdf70       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Exited              kindnet-cni               0                   79aac73c73747       kindnet-srv54
	62740614906c4       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   About a minute ago   Exited              etcd                      0                   087d654c5df5d       etcd-pause-261592
	791224c1c5bea       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   About a minute ago   Exited              kube-apiserver            0                   75312aca7f362       kube-apiserver-pause-261592
	d43427b82f7de       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   About a minute ago   Exited              kube-controller-manager   0                   5015fc0f756f6       kube-controller-manager-pause-261592
	0f52739bcc35b       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   About a minute ago   Exited              kube-scheduler            0                   1d9b6da3b39af       kube-scheduler-pause-261592
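The listing above reflects the node's container runtime state (note the first-attempt containers now Exited and their attempt-1 replacements Running in the same pod sandboxes). Assuming the usual crictl tooling is present inside the minikube node, which this log does not itself show, a similar listing including exited containers could be produced with:

    minikube -p pause-261592 ssh "sudo crictl ps -a"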
	
	
	==> coredns [20c26555e198ebf41e6877314f76c6c3ec980e2db646eddbd5241397bbd47b93] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49372 - 12140 "HINFO IN 4977033123192698539.7224489375946866723. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019174511s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5c89115e27ff1803f2493901481a19373c763d1e1cf1b45b61c9f244f35a1f17] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32891 - 56679 "HINFO IN 5884894497945970033.5634341614239618359. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020473488s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6e45908eccf2425f42fda0e757ce218c98526882f85c4417b7e5dd06150804d4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40796 - 21458 "HINFO IN 5834418650777757923.768020086510170513. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.031987598s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ba30065a5829c70a38c30a7b011095b86cf35e7e644f2c47c8f58dd95b27ed2f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45938 - 32599 "HINFO IN 907179989567173052.217953983079795387. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.017102979s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
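The four coredns logs above show the original replicas receiving SIGTERM and the restarted replicas retrying 10.96.0.1:443 until the apiserver comes back. As a hedged follow-up check (the k8s-app=kube-dns selector is the standard CoreDNS label convention, not something printed in this log; the context name matches the "pause-261592" cluster configured above), their post-restart state could be confirmed with:

    kubectl --context pause-261592 -n kube-system get pods -l k8s-app=kube-dns -o wide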
	
	
	==> describe nodes <==
	Name:               pause-261592
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-261592
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=pause-261592
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T03_46_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:46:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-261592
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:47:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:47:37 +0000   Fri, 04 Oct 2024 03:46:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:47:37 +0000   Fri, 04 Oct 2024 03:46:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:47:37 +0000   Fri, 04 Oct 2024 03:46:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:47:37 +0000   Fri, 04 Oct 2024 03:47:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-261592
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2df4eb40c314411ac012375b7a19ec0
	  System UUID:                e601b0c3-07cc-400d-8910-290003d13814
	  Boot ID:                    cc975b9c-d4f7-443e-a63b-68cdfd7ad286
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-42rv6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     73s
	  kube-system                 coredns-7c65d6cfc9-9n4vl                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     73s
	  kube-system                 etcd-pause-261592                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         78s
	  kube-system                 kindnet-srv54                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      73s
	  kube-system                 kube-apiserver-pause-261592             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-261592    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-k84f2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-pause-261592             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 72s   kube-proxy       
	  Normal   Starting                 7s    kube-proxy       
	  Normal   Starting                 78s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 78s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  78s   kubelet          Node pause-261592 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    78s   kubelet          Node pause-261592 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     78s   kubelet          Node pause-261592 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           74s   node-controller  Node pause-261592 event: Registered Node pause-261592 in Controller
	  Normal   NodeReady                31s   kubelet          Node pause-261592 status is now: NodeReady
	  Normal   RegisteredNode           5s    node-controller  Node pause-261592 event: Registered Node pause-261592 in Controller
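For reference, the "Allocated resources" totals above are simply the column sums of the pod table, measured against the node's allocatable capacity (2 CPUs, 8022296Ki memory):

    CPU requests:    100m + 100m + 100m + 100m + 250m + 200m + 0 + 100m = 950m   (950m / 2000m ≈ 47%)
    CPU limits:      100m (kindnet only)                                          (100m / 2000m = 5%)
    Memory requests: 70Mi + 70Mi + 100Mi + 50Mi = 290Mi                           (≈ 3% of 8022296Ki)
    Memory limits:   170Mi + 170Mi + 50Mi = 390Mi                                 (≈ 4% of 8022296Ki)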
	
	
	==> dmesg <==
	[Oct 4 02:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015570] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.529270] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.049348] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015318] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.608453] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.834894] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 4 03:11] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 4 03:45] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [02461d0ecefa6fc0843c5040172ef8feae2e780ed9db3d77adfa73f1de49e8b5] <==
	{"level":"info","ts":"2024-10-04T03:47:51.745389Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-04T03:47:51.745540Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-04T03:47:51.745578Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-04T03:47:51.746726Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:47:51.751899Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-04T03:47:51.754769Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-10-04T03:47:51.761411Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-10-04T03:47:51.762865Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-04T03:47:51.762958Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-04T03:47:51.870968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-04T03:47:51.871083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-04T03:47:51.871155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-10-04T03:47:51.871211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2024-10-04T03:47:51.871250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-10-04T03:47:51.871308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2024-10-04T03:47:51.871341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-10-04T03:47:51.879389Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-261592 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:47:51.879443Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:47:51.879855Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:47:51.879962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:47:51.887325Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:47:51.888286Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:47:51.889151Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:47:51.893947Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:47:51.894799Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> etcd [62740614906c4108c264aa4ee766e9fff025c5ef0762e785b5a44f65ec991081] <==
	{"level":"info","ts":"2024-10-04T03:46:43.605333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2024-10-04T03:46:43.605341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-10-04T03:46:43.609430Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-261592 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:46:43.609471Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:46:43.609727Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:46:43.612468Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:46:43.613429Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:46:43.614472Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:46:43.615400Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:46:43.622448Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-10-04T03:46:43.617432Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:46:43.623053Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:46:43.623126Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:46:43.626664Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:46:43.626735Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:47:42.459423Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-04T03:47:42.459492Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-261592","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"warn","ts":"2024-10-04T03:47:42.459579Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:47:42.459672Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:47:42.556006Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:47:42.556072Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-04T03:47:42.556135Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2024-10-04T03:47:42.558203Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-10-04T03:47:42.558356Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-10-04T03:47:42.558371Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-261592","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 03:48:09 up  1:30,  0 users,  load average: 5.52, 3.34, 2.48
	Linux pause-261592 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1bca1e623975c87aca561b3275a5bc55990583c7a23ab4d23b7595824e768c0a] <==
	I1004 03:47:51.527746       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1004 03:47:51.532039       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1004 03:47:51.532258       1 main.go:148] setting mtu 1500 for CNI 
	I1004 03:47:51.532307       1 main.go:178] kindnetd IP family: "ipv4"
	I1004 03:47:51.532349       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1004 03:47:51.974021       1 controller.go:334] Starting controller kube-network-policies
	I1004 03:47:52.016198       1 controller.go:338] Waiting for informer caches to sync
	I1004 03:47:52.021316       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1004 03:48:01.321806       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1004 03:48:01.321860       1 metrics.go:61] Registering metrics
	I1004 03:48:01.321941       1 controller.go:374] Syncing nftables rules
	I1004 03:48:01.973755       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1004 03:48:01.973845       1 main.go:299] handling current node
	
	
	==> kindnet [ffb8dcb7cdf70e6e7e692b2a1a724f48acf77856b5fc252be5e579d9316c71b8] <==
	W1004 03:47:26.819821       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1004 03:47:26.819913       1 trace.go:236] Trace[398187072]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (04-Oct-2024 03:46:56.818) (total time: 30001ms):
	Trace[398187072]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:47:26.819)
	Trace[398187072]: [30.00112748s] [30.00112748s] END
	E1004 03:47:26.819937       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W1004 03:47:26.819825       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W1004 03:47:26.820011       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1004 03:47:26.820066       1 trace.go:236] Trace[632445758]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (04-Oct-2024 03:46:56.819) (total time: 30000ms):
	Trace[632445758]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (03:47:26.820)
	Trace[632445758]: [30.000840621s] [30.000840621s] END
	E1004 03:47:26.820081       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1004 03:47:26.820037       1 trace.go:236] Trace[1197916360]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (04-Oct-2024 03:46:56.818) (total time: 30001ms):
	Trace[1197916360]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:47:26.819)
	Trace[1197916360]: [30.001518461s] [30.001518461s] END
	E1004 03:47:26.820094       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W1004 03:47:26.820218       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1004 03:47:26.820268       1 trace.go:236] Trace[258284427]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (04-Oct-2024 03:46:56.819) (total time: 30001ms):
	Trace[258284427]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:47:26.820)
	Trace[258284427]: [30.00110097s] [30.00110097s] END
	E1004 03:47:26.820284       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1004 03:47:28.318774       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1004 03:47:28.318822       1 metrics.go:61] Registering metrics
	I1004 03:47:28.318882       1 controller.go:374] Syncing nftables rules
	I1004 03:47:36.825274       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1004 03:47:36.825336       1 main.go:299] handling current node
	
	
	==> kube-apiserver [16f000d572a96304522fbd46b6ebb85bf3fe6123a4f86bf24eeb844063bf5a71] <==
	I1004 03:48:00.799840       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1004 03:48:00.799850       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1004 03:48:00.743809       1 controller.go:119] Starting legacy_token_tracking_controller
	I1004 03:48:01.057872       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I1004 03:48:01.210902       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1004 03:48:01.265633       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:48:01.302692       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:48:01.302720       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:48:01.303107       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1004 03:48:01.303375       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 03:48:01.309779       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1004 03:48:01.309843       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1004 03:48:01.310182       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 03:48:01.310204       1 policy_source.go:224] refreshing policies
	I1004 03:48:01.310820       1 aggregator.go:171] initial CRD sync complete...
	I1004 03:48:01.310840       1 autoregister_controller.go:144] Starting autoregister controller
	I1004 03:48:01.310846       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 03:48:01.310852       1 cache.go:39] Caches are synced for autoregister controller
	I1004 03:48:01.311001       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1004 03:48:01.356860       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1004 03:48:01.361711       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1004 03:48:01.367843       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:48:01.801601       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:48:04.076831       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:48:04.178833       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [791224c1c5beaa48a02dd37c47d161a52607518ce5271805ba2b539a363603d9] <==
	W1004 03:47:42.519035       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519117       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519180       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519237       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519295       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519348       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519406       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519460       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519518       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519576       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519627       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519684       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519988       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.522580       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.522650       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.522719       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.522768       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.522806       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.522857       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.524679       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.524735       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.524773       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.524812       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.524850       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.524887       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [6e2a5d0b9ab3eb5a96949e350a2f0eec6b4d45fd54384d9fdf9fb6b033044085] <==
	I1004 03:48:03.861024       1 shared_informer.go:320] Caches are synced for taint
	I1004 03:48:03.861242       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1004 03:48:03.861729       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-261592"
	I1004 03:48:03.861898       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1004 03:48:03.877453       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1004 03:48:03.877562       1 shared_informer.go:320] Caches are synced for ephemeral
	I1004 03:48:03.877573       1 shared_informer.go:320] Caches are synced for deployment
	I1004 03:48:03.877582       1 shared_informer.go:320] Caches are synced for daemon sets
	I1004 03:48:03.877592       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1004 03:48:03.881942       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1004 03:48:03.887560       1 shared_informer.go:320] Caches are synced for GC
	I1004 03:48:03.918600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.624463ms"
	I1004 03:48:03.924472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="64.589µs"
	I1004 03:48:03.923420       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1004 03:48:03.929766       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:48:03.974905       1 shared_informer.go:320] Caches are synced for endpoint
	I1004 03:48:03.979369       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1004 03:48:03.995950       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:48:04.401119       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:48:04.401275       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 03:48:04.410427       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:48:07.714757       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.966908ms"
	I1004 03:48:07.718287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.385µs"
	I1004 03:48:07.789677       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.765034ms"
	I1004 03:48:07.789940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="113.843µs"
	
	
	==> kube-controller-manager [d43427b82f7def3fbcbe40e387ba39f632412a2e72ed6e718610483c1cbff0ce] <==
	I1004 03:46:54.629541       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1004 03:46:54.632349       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:46:54.676427       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1004 03:46:54.697679       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1004 03:46:54.700031       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:46:54.769737       1 shared_informer.go:320] Caches are synced for attach detach
	I1004 03:46:55.164489       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:46:55.176015       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:46:55.176062       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 03:46:55.441657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-261592"
	I1004 03:46:55.714858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="171.848392ms"
	I1004 03:46:55.728530       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="13.623638ms"
	I1004 03:46:55.747792       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.211047ms"
	I1004 03:46:55.747893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.459µs"
	I1004 03:47:37.337472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-261592"
	I1004 03:47:37.348957       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-261592"
	I1004 03:47:37.357597       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="92.305µs"
	I1004 03:47:37.363535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="50.6µs"
	I1004 03:47:37.376395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="91.657µs"
	I1004 03:47:37.388007       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.751µs"
	I1004 03:47:38.759507       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.222µs"
	I1004 03:47:38.798210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.274494ms"
	I1004 03:47:38.817984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.658207ms"
	I1004 03:47:38.818270       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.057µs"
	I1004 03:47:39.482805       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c0e2e70ad4035836e7818c915e0db86d4485f6ff6afabe892db8d1e93822e1ea] <==
	I1004 03:46:56.454385       1 server_linux.go:66] "Using iptables proxy"
	I1004 03:46:56.548595       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	E1004 03:46:56.548745       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:46:56.617529       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1004 03:46:56.617651       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:46:56.619524       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:46:56.619950       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:46:56.620124       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:46:56.621485       1 config.go:199] "Starting service config controller"
	I1004 03:46:56.621564       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:46:56.621622       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:46:56.621652       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:46:56.622221       1 config.go:328] "Starting node config controller"
	I1004 03:46:56.623855       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:46:56.723456       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:46:56.723510       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:46:56.725377       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e63018b158a7bd670279f3703aa2d851093804185d342910399a1086f42f07f5] <==
	I1004 03:47:55.848090       1 server_linux.go:66] "Using iptables proxy"
	I1004 03:48:01.388257       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	E1004 03:48:01.388499       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:48:01.647585       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1004 03:48:01.648081       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:48:01.672560       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:48:01.673065       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:48:01.673341       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:48:01.674641       1 config.go:199] "Starting service config controller"
	I1004 03:48:01.674747       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:48:01.674833       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:48:01.674882       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:48:01.681112       1 config.go:328] "Starting node config controller"
	I1004 03:48:01.681267       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:48:01.775938       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:48:01.775988       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:48:01.783219       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f52739bcc35ba86f784c063cd2903825f9bafc686012ce3c188937f35f5bb1d] <==
	E1004 03:46:47.874826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.874865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 03:46:47.874910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.874929       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 03:46:47.875005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.874982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:46:47.875080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.875103       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:46:47.875157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.875161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:46:47.875244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.874884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 03:46:47.875320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.875040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 03:46:47.877385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:48.739360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 03:46:48.739477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:48.958253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:46:48.958370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:48.959524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 03:46:48.959614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:49.009532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:46:49.009582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1004 03:46:49.557603       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1004 03:47:42.458756       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fb198c02c4ed24f961fb6303226c64528c187c0f217f6e982bc160b51f2db2e1] <==
	I1004 03:47:56.948401       1 serving.go:386] Generated self-signed cert in-memory
	W1004 03:48:01.143488       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 03:48:01.143596       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 03:48:01.143633       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 03:48:01.143666       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 03:48:01.304546       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 03:48:01.304635       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:48:01.316170       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 03:48:01.319387       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 03:48:01.319426       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:48:01.319458       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 03:48:01.420598       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.813096    1523 status_manager.go:851] "Failed to get status for pod" podUID="3c8258a1-0c38-4c03-8d36-ee9b2606feb9" pod="kube-system/coredns-7c65d6cfc9-9n4vl" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9n4vl\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.813265    1523 status_manager.go:851] "Failed to get status for pod" podUID="9c0b7172-82ef-42e6-bf7e-126917a5f027" pod="kube-system/coredns-7c65d6cfc9-42rv6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-42rv6\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.813420    1523 status_manager.go:851] "Failed to get status for pod" podUID="cf0f8906452862a19e15cc02d1dc003a" pod="kube-system/etcd-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.813565    1523 status_manager.go:851] "Failed to get status for pod" podUID="cd9b2bf70ea7f34f08b9f659d966a9c0" pod="kube-system/kube-scheduler-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.814753    1523 status_manager.go:851] "Failed to get status for pod" podUID="84cd1b956167e16e9e2a1ed0b5d101ce" pod="kube-system/kube-controller-manager-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.820634    1523 status_manager.go:851] "Failed to get status for pod" podUID="e49267eabe20acbbe7e6af0123b5c4f9" pod="kube-system/kube-apiserver-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.821031    1523 status_manager.go:851] "Failed to get status for pod" podUID="a063f599-caec-4865-9852-66e0765f7359" pod="kube-system/kindnet-srv54" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-srv54\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.821470    1523 status_manager.go:851] "Failed to get status for pod" podUID="7c42b79f-7f6b-4035-a550-f5c278021ea2" pod="kube-system/kube-proxy-k84f2" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k84f2\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.821755    1523 status_manager.go:851] "Failed to get status for pod" podUID="3c8258a1-0c38-4c03-8d36-ee9b2606feb9" pod="kube-system/coredns-7c65d6cfc9-9n4vl" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9n4vl\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.822022    1523 status_manager.go:851] "Failed to get status for pod" podUID="9c0b7172-82ef-42e6-bf7e-126917a5f027" pod="kube-system/coredns-7c65d6cfc9-42rv6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-42rv6\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.822272    1523 status_manager.go:851] "Failed to get status for pod" podUID="cf0f8906452862a19e15cc02d1dc003a" pod="kube-system/etcd-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.822526    1523 status_manager.go:851] "Failed to get status for pod" podUID="cd9b2bf70ea7f34f08b9f659d966a9c0" pod="kube-system/kube-scheduler-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.822823    1523 status_manager.go:851] "Failed to get status for pod" podUID="cf0f8906452862a19e15cc02d1dc003a" pod="kube-system/etcd-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.823103    1523 status_manager.go:851] "Failed to get status for pod" podUID="cd9b2bf70ea7f34f08b9f659d966a9c0" pod="kube-system/kube-scheduler-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.823358    1523 status_manager.go:851] "Failed to get status for pod" podUID="84cd1b956167e16e9e2a1ed0b5d101ce" pod="kube-system/kube-controller-manager-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.823638    1523 status_manager.go:851] "Failed to get status for pod" podUID="e49267eabe20acbbe7e6af0123b5c4f9" pod="kube-system/kube-apiserver-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.823912    1523 status_manager.go:851] "Failed to get status for pod" podUID="a063f599-caec-4865-9852-66e0765f7359" pod="kube-system/kindnet-srv54" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-srv54\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.824187    1523 status_manager.go:851] "Failed to get status for pod" podUID="7c42b79f-7f6b-4035-a550-f5c278021ea2" pod="kube-system/kube-proxy-k84f2" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k84f2\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.824464    1523 status_manager.go:851] "Failed to get status for pod" podUID="3c8258a1-0c38-4c03-8d36-ee9b2606feb9" pod="kube-system/coredns-7c65d6cfc9-9n4vl" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9n4vl\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.824746    1523 status_manager.go:851] "Failed to get status for pod" podUID="9c0b7172-82ef-42e6-bf7e-126917a5f027" pod="kube-system/coredns-7c65d6cfc9-42rv6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-42rv6\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:48:00 pause-261592 kubelet[1523]: E1004 03:48:00.666253    1523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728013680666027306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125700,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:48:00 pause-261592 kubelet[1523]: E1004 03:48:00.666294    1523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728013680666027306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125700,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:48:00 pause-261592 kubelet[1523]: E1004 03:48:00.918797    1523 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Oct 04 03:48:00 pause-261592 kubelet[1523]: E1004 03:48:00.919525    1523 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Oct 04 03:48:00 pause-261592 kubelet[1523]: E1004 03:48:00.919688    1523 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-261592 -n pause-261592
helpers_test.go:261: (dbg) Run:  kubectl --context pause-261592 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-261592
helpers_test.go:235: (dbg) docker inspect pause-261592:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d29fdc6e6055eb05112373ebc77fff9c70d5d6aa5c4911f3eba0e4c82eb1ac5",
	        "Created": "2024-10-04T03:46:24.601096244Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-04T03:46:24.816583432Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/7d29fdc6e6055eb05112373ebc77fff9c70d5d6aa5c4911f3eba0e4c82eb1ac5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d29fdc6e6055eb05112373ebc77fff9c70d5d6aa5c4911f3eba0e4c82eb1ac5/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d29fdc6e6055eb05112373ebc77fff9c70d5d6aa5c4911f3eba0e4c82eb1ac5/hosts",
	        "LogPath": "/var/lib/docker/containers/7d29fdc6e6055eb05112373ebc77fff9c70d5d6aa5c4911f3eba0e4c82eb1ac5/7d29fdc6e6055eb05112373ebc77fff9c70d5d6aa5c4911f3eba0e4c82eb1ac5-json.log",
	        "Name": "/pause-261592",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-261592:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-261592",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a1effe3ed0b396717f632fce8d2c5360ecacac03af23b1791dbe7a27100f0585-init/diff:/var/lib/docker/overlay2/113409e5ac8a20e78db05ebf8d2720874d391240a7f47648e5e10a2a0c89288f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a1effe3ed0b396717f632fce8d2c5360ecacac03af23b1791dbe7a27100f0585/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a1effe3ed0b396717f632fce8d2c5360ecacac03af23b1791dbe7a27100f0585/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a1effe3ed0b396717f632fce8d2c5360ecacac03af23b1791dbe7a27100f0585/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-261592",
	                "Source": "/var/lib/docker/volumes/pause-261592/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-261592",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-261592",
	                "name.minikube.sigs.k8s.io": "pause-261592",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e88b121bf20e8ab90009c2a0462f4fab2693d40d230d0fc17bba1a3df6eb2af",
	            "SandboxKey": "/var/run/docker/netns/8e88b121bf20",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-261592": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "791985b9a1c9615b77e486ac6441ab7cce98e79371b4be0d691b7e1e70cb02f4",
	                    "EndpointID": "08647013cbbafbc535742849826d771b05dd8715008398f52eaaf067bef6f716",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-261592",
	                        "7d29fdc6e605"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-261592 -n pause-261592
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-261592 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-261592 logs -n 25: (1.975377205s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p missing-upgrade-014414      | minikube                  | jenkins | v1.26.0 | 04 Oct 24 03:40 UTC | 04 Oct 24 03:42 UTC |
	|         | --memory=2200 --driver=docker  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-324508         | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:41 UTC | 04 Oct 24 03:41 UTC |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-324508         | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:41 UTC | 04 Oct 24 03:41 UTC |
	| start   | -p NoKubernetes-324508         | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:41 UTC | 04 Oct 24 03:41 UTC |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-324508 sudo    | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:41 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-324508         | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:42 UTC | 04 Oct 24 03:42 UTC |
	| start   | -p NoKubernetes-324508         | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:42 UTC | 04 Oct 24 03:42 UTC |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-324508 sudo    | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:42 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-324508         | NoKubernetes-324508       | jenkins | v1.34.0 | 04 Oct 24 03:42 UTC | 04 Oct 24 03:42 UTC |
	| start   | -p kubernetes-upgrade-904287   | kubernetes-upgrade-904287 | jenkins | v1.34.0 | 04 Oct 24 03:42 UTC | 04 Oct 24 03:43 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-014414      | missing-upgrade-014414    | jenkins | v1.34.0 | 04 Oct 24 03:42 UTC | 04 Oct 24 03:43 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-904287   | kubernetes-upgrade-904287 | jenkins | v1.34.0 | 04 Oct 24 03:43 UTC | 04 Oct 24 03:43 UTC |
	| start   | -p kubernetes-upgrade-904287   | kubernetes-upgrade-904287 | jenkins | v1.34.0 | 04 Oct 24 03:43 UTC | 04 Oct 24 03:48 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-014414      | missing-upgrade-014414    | jenkins | v1.34.0 | 04 Oct 24 03:43 UTC | 04 Oct 24 03:43 UTC |
	| start   | -p stopped-upgrade-917470      | minikube                  | jenkins | v1.26.0 | 04 Oct 24 03:43 UTC | 04 Oct 24 03:44 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=docker             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-917470 stop    | minikube                  | jenkins | v1.26.0 | 04 Oct 24 03:44 UTC | 04 Oct 24 03:44 UTC |
	| start   | -p stopped-upgrade-917470      | stopped-upgrade-917470    | jenkins | v1.34.0 | 04 Oct 24 03:44 UTC | 04 Oct 24 03:44 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-917470      | stopped-upgrade-917470    | jenkins | v1.34.0 | 04 Oct 24 03:44 UTC | 04 Oct 24 03:44 UTC |
	| start   | -p running-upgrade-505617      | minikube                  | jenkins | v1.26.0 | 04 Oct 24 03:44 UTC | 04 Oct 24 03:45 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=docker             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-505617      | running-upgrade-505617    | jenkins | v1.34.0 | 04 Oct 24 03:45 UTC | 04 Oct 24 03:46 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-505617      | running-upgrade-505617    | jenkins | v1.34.0 | 04 Oct 24 03:46 UTC | 04 Oct 24 03:46 UTC |
	| start   | -p pause-261592 --memory=2048  | pause-261592              | jenkins | v1.34.0 | 04 Oct 24 03:46 UTC | 04 Oct 24 03:47 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-261592                | pause-261592              | jenkins | v1.34.0 | 04 Oct 24 03:47 UTC | 04 Oct 24 03:48 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-904287   | kubernetes-upgrade-904287 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-904287   | kubernetes-upgrade-904287 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 03:48:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 03:48:04.672636  188340 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:48:04.672769  188340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:48:04.672779  188340 out.go:358] Setting ErrFile to fd 2...
	I1004 03:48:04.672785  188340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:48:04.673025  188340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 03:48:04.673437  188340 out.go:352] Setting JSON to false
	I1004 03:48:04.674382  188340 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5430,"bootTime":1728008255,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1004 03:48:04.674462  188340 start.go:139] virtualization:  
	I1004 03:48:04.677606  188340 out.go:177] * [kubernetes-upgrade-904287] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 03:48:04.681010  188340 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:48:04.681075  188340 notify.go:220] Checking for updates...
	I1004 03:48:04.686496  188340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:48:04.689015  188340 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:48:04.691539  188340 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	I1004 03:48:04.694149  188340 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 03:48:04.696728  188340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:48:04.699972  188340 config.go:182] Loaded profile config "kubernetes-upgrade-904287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:48:04.700867  188340 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:48:04.730249  188340 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 03:48:04.730367  188340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:48:04.787957  188340 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-04 03:48:04.778060518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:48:04.788069  188340 docker.go:318] overlay module found
	I1004 03:48:04.790979  188340 out.go:177] * Using the docker driver based on existing profile
	I1004 03:48:04.793591  188340 start.go:297] selected driver: docker
	I1004 03:48:04.793613  188340 start.go:901] validating driver "docker" against &{Name:kubernetes-upgrade-904287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-904287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:48:04.793748  188340 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:48:04.794386  188340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:48:04.844752  188340 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-04 03:48:04.834438127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:48:04.845136  188340 cni.go:84] Creating CNI manager for ""
	I1004 03:48:04.845190  188340 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 03:48:04.845378  188340 start.go:340] cluster config:
	{Name:kubernetes-upgrade-904287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-904287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:48:04.850024  188340 out.go:177] * Starting "kubernetes-upgrade-904287" primary control-plane node in "kubernetes-upgrade-904287" cluster
	I1004 03:48:04.853124  188340 cache.go:121] Beginning downloading kic base image for docker with crio
	I1004 03:48:04.856019  188340 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1004 03:48:04.858579  188340 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 03:48:04.858522  188340 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:48:04.858668  188340 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1004 03:48:04.858681  188340 cache.go:56] Caching tarball of preloaded images
	I1004 03:48:04.858788  188340 preload.go:172] Found /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1004 03:48:04.858797  188340 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:48:04.858899  188340 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/kubernetes-upgrade-904287/config.json ...
	I1004 03:48:04.879277  188340 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1004 03:48:04.879296  188340 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1004 03:48:04.879318  188340 cache.go:194] Successfully downloaded all kic artifacts
	I1004 03:48:04.879346  188340 start.go:360] acquireMachinesLock for kubernetes-upgrade-904287: {Name:mkd3caf1b1dafbdc83a0b9efd07903cf90ba4f07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:48:04.879400  188340 start.go:364] duration metric: took 33.829µs to acquireMachinesLock for "kubernetes-upgrade-904287"
	I1004 03:48:04.879419  188340 start.go:96] Skipping create...Using existing machine configuration
	I1004 03:48:04.879425  188340 fix.go:54] fixHost starting: 
	I1004 03:48:04.879735  188340 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-904287 --format={{.State.Status}}
	I1004 03:48:04.897322  188340 fix.go:112] recreateIfNeeded on kubernetes-upgrade-904287: state=Running err=<nil>
	W1004 03:48:04.897350  188340 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 03:48:04.900237  188340 out.go:177] * Updating the running docker "kubernetes-upgrade-904287" container ...
	I1004 03:48:01.165837  186480 node_ready.go:49] node "pause-261592" has status "Ready":"True"
	I1004 03:48:01.165861  186480 node_ready.go:38] duration metric: took 8.899138685s for node "pause-261592" to be "Ready" ...
	I1004 03:48:01.165870  186480 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:48:01.165911  186480 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1004 03:48:01.165923  186480 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1004 03:48:01.293984  186480 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-42rv6" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.357620  186480 pod_ready.go:93] pod "coredns-7c65d6cfc9-42rv6" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:01.357688  186480 pod_ready.go:82] duration metric: took 63.618359ms for pod "coredns-7c65d6cfc9-42rv6" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.357716  186480 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9n4vl" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.388354  186480 pod_ready.go:93] pod "coredns-7c65d6cfc9-9n4vl" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:01.388432  186480 pod_ready.go:82] duration metric: took 30.687433ms for pod "coredns-7c65d6cfc9-9n4vl" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.388468  186480 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.408413  186480 pod_ready.go:93] pod "etcd-pause-261592" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:01.408485  186480 pod_ready.go:82] duration metric: took 19.986252ms for pod "etcd-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.408517  186480 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.426612  186480 pod_ready.go:93] pod "kube-apiserver-pause-261592" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:01.426687  186480 pod_ready.go:82] duration metric: took 18.14772ms for pod "kube-apiserver-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:01.426718  186480 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:03.443834  186480 pod_ready.go:103] pod "kube-controller-manager-pause-261592" in "kube-system" namespace has status "Ready":"False"
	I1004 03:48:05.451826  186480 pod_ready.go:103] pod "kube-controller-manager-pause-261592" in "kube-system" namespace has status "Ready":"False"
	I1004 03:48:05.934265  186480 pod_ready.go:93] pod "kube-controller-manager-pause-261592" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:05.934288  186480 pod_ready.go:82] duration metric: took 4.507549283s for pod "kube-controller-manager-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:05.934301  186480 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k84f2" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:05.941889  186480 pod_ready.go:93] pod "kube-proxy-k84f2" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:05.941910  186480 pod_ready.go:82] duration metric: took 7.601752ms for pod "kube-proxy-k84f2" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:05.941921  186480 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:05.949683  186480 pod_ready.go:93] pod "kube-scheduler-pause-261592" in "kube-system" namespace has status "Ready":"True"
	I1004 03:48:05.949759  186480 pod_ready.go:82] duration metric: took 7.818788ms for pod "kube-scheduler-pause-261592" in "kube-system" namespace to be "Ready" ...
	I1004 03:48:05.949784  186480 pod_ready.go:39] duration metric: took 4.783901705s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:48:05.949828  186480 api_server.go:52] waiting for apiserver process to appear ...
	I1004 03:48:05.949923  186480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:48:05.972446  186480 api_server.go:72] duration metric: took 14.132740139s to wait for apiserver process to appear ...
	I1004 03:48:05.972521  186480 api_server.go:88] waiting for apiserver healthz status ...
	I1004 03:48:05.972558  186480 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1004 03:48:05.988185  186480 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1004 03:48:05.990310  186480 api_server.go:141] control plane version: v1.31.1
	I1004 03:48:05.990388  186480 api_server.go:131] duration metric: took 17.844722ms to wait for apiserver health ...
	I1004 03:48:05.990418  186480 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 03:48:05.998933  186480 system_pods.go:59] 8 kube-system pods found
	I1004 03:48:05.999020  186480 system_pods.go:61] "coredns-7c65d6cfc9-42rv6" [9c0b7172-82ef-42e6-bf7e-126917a5f027] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 03:48:05.999045  186480 system_pods.go:61] "coredns-7c65d6cfc9-9n4vl" [3c8258a1-0c38-4c03-8d36-ee9b2606feb9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 03:48:05.999084  186480 system_pods.go:61] "etcd-pause-261592" [2e607ad6-bed2-441f-a635-b3a7fcdb6127] Running
	I1004 03:48:05.999121  186480 system_pods.go:61] "kindnet-srv54" [a063f599-caec-4865-9852-66e0765f7359] Running
	I1004 03:48:05.999146  186480 system_pods.go:61] "kube-apiserver-pause-261592" [8f59c812-b539-431b-8f39-08013081ddbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 03:48:05.999184  186480 system_pods.go:61] "kube-controller-manager-pause-261592" [ce2f6ad2-7642-484e-be9c-9181227ed799] Running
	I1004 03:48:05.999207  186480 system_pods.go:61] "kube-proxy-k84f2" [7c42b79f-7f6b-4035-a550-f5c278021ea2] Running
	I1004 03:48:05.999244  186480 system_pods.go:61] "kube-scheduler-pause-261592" [b84c9d60-d1f3-4466-ae61-f001cff778b4] Running
	I1004 03:48:05.999270  186480 system_pods.go:74] duration metric: took 8.824677ms to wait for pod list to return data ...
	I1004 03:48:05.999293  186480 default_sa.go:34] waiting for default service account to be created ...
	I1004 03:48:06.170316  186480 default_sa.go:45] found service account: "default"
	I1004 03:48:06.170390  186480 default_sa.go:55] duration metric: took 171.061748ms for default service account to be created ...
	I1004 03:48:06.170415  186480 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 03:48:06.374023  186480 system_pods.go:86] 8 kube-system pods found
	I1004 03:48:06.374104  186480 system_pods.go:89] "coredns-7c65d6cfc9-42rv6" [9c0b7172-82ef-42e6-bf7e-126917a5f027] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 03:48:06.374131  186480 system_pods.go:89] "coredns-7c65d6cfc9-9n4vl" [3c8258a1-0c38-4c03-8d36-ee9b2606feb9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 03:48:06.374169  186480 system_pods.go:89] "etcd-pause-261592" [2e607ad6-bed2-441f-a635-b3a7fcdb6127] Running
	I1004 03:48:06.374194  186480 system_pods.go:89] "kindnet-srv54" [a063f599-caec-4865-9852-66e0765f7359] Running
	I1004 03:48:06.374219  186480 system_pods.go:89] "kube-apiserver-pause-261592" [8f59c812-b539-431b-8f39-08013081ddbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 03:48:06.374255  186480 system_pods.go:89] "kube-controller-manager-pause-261592" [ce2f6ad2-7642-484e-be9c-9181227ed799] Running
	I1004 03:48:06.374279  186480 system_pods.go:89] "kube-proxy-k84f2" [7c42b79f-7f6b-4035-a550-f5c278021ea2] Running
	I1004 03:48:06.374298  186480 system_pods.go:89] "kube-scheduler-pause-261592" [b84c9d60-d1f3-4466-ae61-f001cff778b4] Running
	I1004 03:48:06.374335  186480 system_pods.go:126] duration metric: took 203.900239ms to wait for k8s-apps to be running ...
	I1004 03:48:06.374362  186480 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 03:48:06.374451  186480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:48:06.389790  186480 system_svc.go:56] duration metric: took 15.419169ms WaitForService to wait for kubelet
	I1004 03:48:06.389816  186480 kubeadm.go:582] duration metric: took 14.550115674s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:48:06.389834  186480 node_conditions.go:102] verifying NodePressure condition ...
	I1004 03:48:06.570304  186480 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1004 03:48:06.570333  186480 node_conditions.go:123] node cpu capacity is 2
	I1004 03:48:06.570345  186480 node_conditions.go:105] duration metric: took 180.505493ms to run NodePressure ...
	I1004 03:48:06.570357  186480 start.go:241] waiting for startup goroutines ...
	I1004 03:48:06.570365  186480 start.go:246] waiting for cluster config update ...
	I1004 03:48:06.570372  186480 start.go:255] writing updated cluster config ...
	I1004 03:48:06.570701  186480 ssh_runner.go:195] Run: rm -f paused
	I1004 03:48:06.671683  186480 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 03:48:06.676903  186480 out.go:177] * Done! kubectl is now configured to use "pause-261592" cluster and "default" namespace by default
	I1004 03:48:04.903418  188340 machine.go:93] provisionDockerMachine start ...
	I1004 03:48:04.904737  188340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-904287
	I1004 03:48:04.923496  188340 main.go:141] libmachine: Using SSH client type: native
	I1004 03:48:04.923939  188340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1004 03:48:04.923954  188340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 03:48:05.060913  188340 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-904287
	
	I1004 03:48:05.060938  188340 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-904287"
	I1004 03:48:05.061009  188340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-904287
	I1004 03:48:05.079018  188340 main.go:141] libmachine: Using SSH client type: native
	I1004 03:48:05.079324  188340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1004 03:48:05.079344  188340 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-904287 && echo "kubernetes-upgrade-904287" | sudo tee /etc/hostname
	I1004 03:48:05.226247  188340 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-904287
	
	I1004 03:48:05.226340  188340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-904287
	I1004 03:48:05.244721  188340 main.go:141] libmachine: Using SSH client type: native
	I1004 03:48:05.245132  188340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1004 03:48:05.245156  188340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-904287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-904287/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-904287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:48:05.382055  188340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:48:05.382080  188340 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19546-2238/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-2238/.minikube}
	I1004 03:48:05.382104  188340 ubuntu.go:177] setting up certificates
	I1004 03:48:05.382113  188340 provision.go:84] configureAuth start
	I1004 03:48:05.382171  188340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-904287
	I1004 03:48:05.406107  188340 provision.go:143] copyHostCerts
	I1004 03:48:05.406172  188340 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem, removing ...
	I1004 03:48:05.406189  188340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem
	I1004 03:48:05.406264  188340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/ca.pem (1082 bytes)
	I1004 03:48:05.406371  188340 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem, removing ...
	I1004 03:48:05.406377  188340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem
	I1004 03:48:05.406405  188340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/cert.pem (1123 bytes)
	I1004 03:48:05.406464  188340 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem, removing ...
	I1004 03:48:05.406469  188340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem
	I1004 03:48:05.406493  188340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-2238/.minikube/key.pem (1679 bytes)
	I1004 03:48:05.406554  188340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-904287 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-904287 localhost minikube]
	I1004 03:48:05.561958  188340 provision.go:177] copyRemoteCerts
	I1004 03:48:05.562050  188340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:48:05.562102  188340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-904287
	I1004 03:48:05.588309  188340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/kubernetes-upgrade-904287/id_rsa Username:docker}
	I1004 03:48:05.690550  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:48:05.718855  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1004 03:48:05.742974  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:48:05.767519  188340 provision.go:87] duration metric: took 385.393207ms to configureAuth
	I1004 03:48:05.767584  188340 ubuntu.go:193] setting minikube options for container-runtime
	I1004 03:48:05.767802  188340 config.go:182] Loaded profile config "kubernetes-upgrade-904287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:48:05.767929  188340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-904287
	I1004 03:48:05.784684  188340 main.go:141] libmachine: Using SSH client type: native
	I1004 03:48:05.784937  188340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1004 03:48:05.784959  188340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:48:06.249600  188340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:48:06.249633  188340 machine.go:96] duration metric: took 1.346160194s to provisionDockerMachine
	I1004 03:48:06.249646  188340 start.go:293] postStartSetup for "kubernetes-upgrade-904287" (driver="docker")
	I1004 03:48:06.249657  188340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:48:06.249720  188340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:48:06.249777  188340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-904287
	I1004 03:48:06.266603  188340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/kubernetes-upgrade-904287/id_rsa Username:docker}
	I1004 03:48:06.362980  188340 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:48:06.366210  188340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1004 03:48:06.366244  188340 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1004 03:48:06.366255  188340 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1004 03:48:06.366261  188340 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1004 03:48:06.366272  188340 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/addons for local assets ...
	I1004 03:48:06.366329  188340 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-2238/.minikube/files for local assets ...
	I1004 03:48:06.366416  188340 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem -> 75602.pem in /etc/ssl/certs
	I1004 03:48:06.366531  188340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:48:06.379596  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem --> /etc/ssl/certs/75602.pem (1708 bytes)
	I1004 03:48:06.409150  188340 start.go:296] duration metric: took 159.489533ms for postStartSetup
	I1004 03:48:06.409290  188340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:48:06.409337  188340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-904287
	I1004 03:48:06.428513  188340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/kubernetes-upgrade-904287/id_rsa Username:docker}
	I1004 03:48:06.526279  188340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1004 03:48:06.531202  188340 fix.go:56] duration metric: took 1.651770317s for fixHost
	I1004 03:48:06.531226  188340 start.go:83] releasing machines lock for "kubernetes-upgrade-904287", held for 1.651817183s
	I1004 03:48:06.531306  188340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-904287
	I1004 03:48:06.548783  188340 ssh_runner.go:195] Run: cat /version.json
	I1004 03:48:06.548833  188340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-904287
	I1004 03:48:06.549177  188340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:48:06.549283  188340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-904287
	I1004 03:48:06.566542  188340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/kubernetes-upgrade-904287/id_rsa Username:docker}
	I1004 03:48:06.576376  188340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/kubernetes-upgrade-904287/id_rsa Username:docker}
	I1004 03:48:06.746559  188340 ssh_runner.go:195] Run: systemctl --version
	I1004 03:48:07.100897  188340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:48:07.205900  188340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 03:48:07.229487  188340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:48:07.260764  188340 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1004 03:48:07.260837  188340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:48:07.304068  188340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 03:48:07.304089  188340 start.go:495] detecting cgroup driver to use...
	I1004 03:48:07.304119  188340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1004 03:48:07.304179  188340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:48:07.353429  188340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:48:07.384464  188340 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:48:07.384522  188340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:48:07.402473  188340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:48:07.416428  188340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:48:07.650117  188340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:48:07.914397  188340 docker.go:233] disabling docker service ...
	I1004 03:48:07.914476  188340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:48:07.957777  188340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:48:07.990570  188340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:48:08.227545  188340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:48:08.455342  188340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:48:08.502706  188340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:48:08.571373  188340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:48:08.571458  188340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:48:08.600369  188340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:48:08.600441  188340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:48:08.635121  188340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:48:08.658564  188340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:48:08.682576  188340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:48:08.702533  188340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:48:08.738724  188340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:48:08.768170  188340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:48:08.788977  188340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:48:08.811729  188340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:48:08.831427  188340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:48:09.059468  188340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:48:09.347682  188340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:48:09.347752  188340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:48:09.351947  188340 start.go:563] Will wait 60s for crictl version
	I1004 03:48:09.352014  188340 ssh_runner.go:195] Run: which crictl
	I1004 03:48:09.355654  188340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:48:09.403217  188340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1004 03:48:09.403341  188340 ssh_runner.go:195] Run: crio --version
	I1004 03:48:09.472576  188340 ssh_runner.go:195] Run: crio --version
	I1004 03:48:09.539199  188340 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1004 03:48:09.541996  188340 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-904287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 03:48:09.570285  188340 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1004 03:48:09.574375  188340 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-904287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-904287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 03:48:09.574488  188340 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:48:09.574546  188340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:48:09.639596  188340 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:48:09.639616  188340 crio.go:433] Images already preloaded, skipping extraction
	I1004 03:48:09.639673  188340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:48:09.715559  188340 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:48:09.715634  188340 cache_images.go:84] Images are preloaded, skipping loading
	I1004 03:48:09.715657  188340 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.1 crio true true} ...
	I1004 03:48:09.715785  188340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-904287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-904287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:48:09.715898  188340 ssh_runner.go:195] Run: crio config
	I1004 03:48:09.809380  188340 cni.go:84] Creating CNI manager for ""
	I1004 03:48:09.809448  188340 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 03:48:09.809474  188340 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 03:48:09.809520  188340 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-904287 NodeName:kubernetes-upgrade-904287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 03:48:09.809693  188340 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-904287"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 03:48:09.809785  188340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:48:09.832591  188340 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:48:09.832712  188340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 03:48:09.842478  188340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1004 03:48:09.864585  188340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:48:09.898495  188340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I1004 03:48:09.916760  188340 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1004 03:48:09.920693  188340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:48:10.129065  188340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:48:10.145291  188340 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/kubernetes-upgrade-904287 for IP: 192.168.76.2
	I1004 03:48:10.145308  188340 certs.go:194] generating shared ca certs ...
	I1004 03:48:10.145323  188340 certs.go:226] acquiring lock for ca certs: {Name:mk468b07ab6620fd74cefc3667e1a8643008ce5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:48:10.145465  188340 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key
	I1004 03:48:10.145507  188340 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key
	I1004 03:48:10.145514  188340 certs.go:256] generating profile certs ...
	I1004 03:48:10.145594  188340 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/kubernetes-upgrade-904287/client.key
	I1004 03:48:10.145646  188340 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/kubernetes-upgrade-904287/apiserver.key.c3fd572c
	I1004 03:48:10.145682  188340 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/kubernetes-upgrade-904287/proxy-client.key
	I1004 03:48:10.145793  188340 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem (1338 bytes)
	W1004 03:48:10.145821  188340 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560_empty.pem, impossibly tiny 0 bytes
	I1004 03:48:10.145829  188340 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:48:10.145854  188340 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:48:10.145876  188340 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:48:10.145900  188340 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/certs/key.pem (1679 bytes)
	I1004 03:48:10.145940  188340 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem (1708 bytes)
	I1004 03:48:10.146540  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:48:10.171563  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 03:48:10.206914  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:48:10.246009  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 03:48:10.277963  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/kubernetes-upgrade-904287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1004 03:48:10.321608  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/kubernetes-upgrade-904287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 03:48:10.354346  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/kubernetes-upgrade-904287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:48:10.381768  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/kubernetes-upgrade-904287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:48:10.409104  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/certs/7560.pem --> /usr/share/ca-certificates/7560.pem (1338 bytes)
	I1004 03:48:10.436505  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/ssl/certs/75602.pem --> /usr/share/ca-certificates/75602.pem (1708 bytes)
	I1004 03:48:10.477852  188340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:48:10.510402  188340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 03:48:10.534627  188340 ssh_runner.go:195] Run: openssl version
	I1004 03:48:10.540856  188340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:48:10.553061  188340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:48:10.557461  188340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:48:10.557537  188340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:48:10.565986  188340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:48:10.576897  188340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7560.pem && ln -fs /usr/share/ca-certificates/7560.pem /etc/ssl/certs/7560.pem"
	I1004 03:48:10.593169  188340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7560.pem
	I1004 03:48:10.597754  188340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/7560.pem
	I1004 03:48:10.597822  188340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7560.pem
	I1004 03:48:10.605774  188340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7560.pem /etc/ssl/certs/51391683.0"
	I1004 03:48:10.633116  188340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75602.pem && ln -fs /usr/share/ca-certificates/75602.pem /etc/ssl/certs/75602.pem"
	I1004 03:48:10.648959  188340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75602.pem
	I1004 03:48:10.653104  188340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/75602.pem
	I1004 03:48:10.653171  188340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75602.pem
	I1004 03:48:10.661793  188340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75602.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:48:10.671852  188340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:48:10.676011  188340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 03:48:10.692124  188340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 03:48:10.699490  188340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 03:48:10.706726  188340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 03:48:10.714039  188340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 03:48:10.721583  188340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 03:48:10.728349  188340 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-904287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-904287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:48:10.728450  188340 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 03:48:10.728519  188340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 03:48:10.773389  188340 cri.go:89] found id: "f63ce8fb264d5a2fb48375d7dc3e62e0bf2f3e891ec31247af96514919051c6e"
	I1004 03:48:10.773465  188340 cri.go:89] found id: "dc325c79b0cca7d77ccfb708045284f874967ec478fd6ea8e36bc306f94be017"
	I1004 03:48:10.773485  188340 cri.go:89] found id: "2bc3e7fa4bb0717d4ecc5dd76ee7523bb586a1f92d7643073b0f9d69774364cd"
	I1004 03:48:10.773506  188340 cri.go:89] found id: "21a5c2c21eb90700908309dd8c8a4c9a6887c1d14485cb1ad01c2bbca98faedf"
	I1004 03:48:10.773545  188340 cri.go:89] found id: ""
	I1004 03:48:10.773628  188340 ssh_runner.go:195] Run: sudo runc list -f json
	I1004 03:48:10.794853  188340 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"21a5c2c21eb90700908309dd8c8a4c9a6887c1d14485cb1ad01c2bbca98faedf","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/21a5c2c21eb90700908309dd8c8a4c9a6887c1d14485cb1ad01c2bbca98faedf/userdata","rootfs":"/var/lib/containers/storage/overlay/c519d211a1ed2470b172657c64595771ae0f501879d4594df18de30d3a52fe3a/merged","created":"2024-10-04T03:48:06.836385181Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePo
licy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"21a5c2c21eb90700908309dd8c8a4c9a6887c1d14485cb1ad01c2bbca98faedf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-10-04T03:48:06.629622429Z","io.kubernetes.cri-o.Image":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-904287\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7de80d699f4346a4089fa77749d8345a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-904287_7de80d699f4346a4089fa77749d8345a/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/va
r/lib/containers/storage/overlay/c519d211a1ed2470b172657c64595771ae0f501879d4594df18de30d3a52fe3a/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-kubernetes-upgrade-904287_kube-system_7de80d699f4346a4089fa77749d8345a_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/eae842e01f7f638412a12e363f53e24557e18dc1eda33b5e22bd9f1a85f7e17b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"eae842e01f7f638412a12e363f53e24557e18dc1eda33b5e22bd9f1a85f7e17b","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-904287_kube-system_7de80d699f4346a4089fa77749d8345a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7de80d699f4346a4089fa77749d8345a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_p
ath\":\"/var/lib/kubelet/pods/7de80d699f4346a4089fa77749d8345a/containers/etcd/e5e97ac0\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-904287","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7de80d699f4346a4089fa77749d8345a","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"7de80d699f4346a4089fa77749d8345a","kubernetes.io/config.seen":"2024-10-04T03:47:51.414750231Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2bc3e7fa4bb0717d4ecc5dd76ee7523bb586a1f92d7643073b0f9d
69774364cd","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/2bc3e7fa4bb0717d4ecc5dd76ee7523bb586a1f92d7643073b0f9d69774364cd/userdata","rootfs":"/var/lib/containers/storage/overlay/604f7ddc500ee4aff886262c41492f8881a70975228779b2259275b394cbf2df/merged","created":"2024-10-04T03:48:06.780304474Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"12faacf7","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"12faacf7\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2bc3e7fa4bb0717d4ecc5dd76ee752
3bb586a1f92d7643073b0f9d69774364cd","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-10-04T03:48:06.645033919Z","io.kubernetes.cri-o.Image":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri-o.ImageRef":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-904287\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ac5556613d28163ae907018897aea895\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-904287_ac5556613d28163ae907018897aea895/kube-scheduler/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/604f7ddc500ee4aff886262c41492f8881a70975228
779b2259275b394cbf2df/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-904287_kube-system_ac5556613d28163ae907018897aea895_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/dc072c841e741cdc0809d58a77928a741dfad37457fc0ac9b1d935ca08a620eb/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"dc072c841e741cdc0809d58a77928a741dfad37457fc0ac9b1d935ca08a620eb","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-904287_kube-system_ac5556613d28163ae907018897aea895_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ac5556613d28163ae907018897aea895/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ac5556613d28163a
e907018897aea895/containers/kube-scheduler/1d78db33\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-904287","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ac5556613d28163ae907018897aea895","kubernetes.io/config.hash":"ac5556613d28163ae907018897aea895","kubernetes.io/config.seen":"2024-10-04T03:47:51.414748640Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dc325c79b0cca7d77ccfb708045284f874967ec478fd6ea8e36bc306f94be017","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/dc325c79b0cca7d77ccfb708045284f874967ec478fd6ea8e36bc306f94be017/userdata","rootfs":"/var/lib/containers/storage/overlay/cf64d8972888e1b5f1849231b7fb070d9b6c0af3f
d600511950e71612b7546d4/merged","created":"2024-10-04T03:48:06.874963861Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7df2713b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"6","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7df2713b\",\"io.kubernetes.container.restartCount\":\"6\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"dc325c79b0cca7d77ccfb708045284f874967ec478fd6ea8e36bc306f94be017","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-10-04T03:48:06.751169285Z","io.kubernetes.cri-o.Image":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","io.kubernet
es.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri-o.ImageRef":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-904287\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5128bf4866a6386415dbb62a3acdac0c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-904287_5128bf4866a6386415dbb62a3acdac0c/kube-apiserver/6.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":6}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cf64d8972888e1b5f1849231b7fb070d9b6c0af3fd600511950e71612b7546d4/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-904287_kube-system_5128bf4866a6386415dbb62a3acdac0c_6","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/84a210e4453
4c4a73766c75936e8a8e3af1e2609c8e3b572afad5bf82a71ba0c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"84a210e44534c4a73766c75936e8a8e3af1e2609c8e3b572afad5bf82a71ba0c","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-904287_kube-system_5128bf4866a6386415dbb62a3acdac0c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5128bf4866a6386415dbb62a3acdac0c/containers/kube-apiserver/60e17e17\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5128bf4866a6386415dbb62a3acdac0c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabe
l\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-904287","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5128bf4866a6386415dbb62a3acdac0c","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.76.2:8443","kubernetes.io/config.hash":"5128bf4866a6386415dbb62a3acdac0c","kubernetes.io/config.see
n":"2024-10-04T03:47:51.414735963Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f63ce8fb264d5a2fb48375d7dc3e62e0bf2f3e891ec31247af96514919051c6e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f63ce8fb264d5a2fb48375d7dc3e62e0bf2f3e891ec31247af96514919051c6e/userdata","rootfs":"/var/lib/containers/storage/overlay/afd6c928c1a9f81cb75fbebd0bc8c119f53f42ab87696a72488ec70ddc7ecb6c/merged","created":"2024-10-04T03:48:06.930016288Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d1900d79","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"6","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d1900d79\",\"io.kubernetes.container.restartCount\":\"6\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-lo
g\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f63ce8fb264d5a2fb48375d7dc3e62e0bf2f3e891ec31247af96514919051c6e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-10-04T03:48:06.758086099Z","io.kubernetes.cri-o.Image":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri-o.ImageRef":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-904287\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4bb7f8dcb595cab608e84eccca627cd9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-904287_4bb7f8dcb595cab608e84ecc
ca627cd9/kube-controller-manager/6.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":6}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/afd6c928c1a9f81cb75fbebd0bc8c119f53f42ab87696a72488ec70ddc7ecb6c/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-904287_kube-system_4bb7f8dcb595cab608e84eccca627cd9_6","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1bafff56dc583bcec537d319e533171fc08dac101049cdaa190a244608d354fa/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1bafff56dc583bcec537d319e533171fc08dac101049cdaa190a244608d354fa","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-904287_kube-system_4bb7f8dcb595cab608e84eccca627cd9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_pa
th\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4bb7f8dcb595cab608e84eccca627cd9/containers/kube-controller-manager/1c1440a3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4bb7f8dcb595cab608e84eccca627cd9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"co
ntainer_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-904287","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4bb7f8dcb595cab608e84eccca627cd9","kubernetes.io/config.hash":"4bb7f8dcb595cab608e84eccca627cd9","kubernetes.io/config.seen":"2024-10-04T03:47:51.414747286Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I1004 03:48:10.795163  188340 cri.go:126] list returned 4 containers
	I1004 03:48:10.795173  188340 cri.go:129] container: {ID:21a5c2c21eb90700908309dd8c8a4c9a6887c1d14485cb1ad01c2bbca98faedf Status:stopped}
	I1004 03:48:10.795192  188340 cri.go:135] skipping {21a5c2c21eb90700908309dd8c8a4c9a6887c1d14485cb1ad01c2bbca98faedf stopped}: state = "stopped", want "paused"
	I1004 03:48:10.795201  188340 cri.go:129] container: {ID:2bc3e7fa4bb0717d4ecc5dd76ee7523bb586a1f92d7643073b0f9d69774364cd Status:stopped}
	I1004 03:48:10.795207  188340 cri.go:135] skipping {2bc3e7fa4bb0717d4ecc5dd76ee7523bb586a1f92d7643073b0f9d69774364cd stopped}: state = "stopped", want "paused"
	I1004 03:48:10.795212  188340 cri.go:129] container: {ID:dc325c79b0cca7d77ccfb708045284f874967ec478fd6ea8e36bc306f94be017 Status:stopped}
	I1004 03:48:10.795217  188340 cri.go:135] skipping {dc325c79b0cca7d77ccfb708045284f874967ec478fd6ea8e36bc306f94be017 stopped}: state = "stopped", want "paused"
	I1004 03:48:10.795223  188340 cri.go:129] container: {ID:f63ce8fb264d5a2fb48375d7dc3e62e0bf2f3e891ec31247af96514919051c6e Status:stopped}
	I1004 03:48:10.795229  188340 cri.go:135] skipping {f63ce8fb264d5a2fb48375d7dc3e62e0bf2f3e891ec31247af96514919051c6e stopped}: state = "stopped", want "paused"
	I1004 03:48:10.795281  188340 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 03:48:10.815410  188340 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 03:48:10.815428  188340 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 03:48:10.815479  188340 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 03:48:10.829592  188340 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 03:48:10.830293  188340 kubeconfig.go:125] found "kubernetes-upgrade-904287" server: "https://192.168.76.2:8443"
	I1004 03:48:10.831380  188340 kapi.go:59] client config for kubernetes-upgrade-904287: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-2238/.minikube/profiles/kubernetes-upgrade-904287/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-2238/.minikube/profiles/kubernetes-upgrade-904287/client.key", CAFile:"/home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a17550), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 03:48:10.832021  188340 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 03:48:10.841296  188340 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1004 03:48:10.841326  188340 kubeadm.go:597] duration metric: took 25.892699ms to restartPrimaryControlPlane
	I1004 03:48:10.841335  188340 kubeadm.go:394] duration metric: took 112.998139ms to StartCluster
	I1004 03:48:10.841350  188340 settings.go:142] acquiring lock: {Name:mk9c80036423f55b2143f3dcbc4f16f5b78f75ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:48:10.841421  188340 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:48:10.842350  188340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/kubeconfig: {Name:mkd1a87175976669e9a14c51acaef20b883a2130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:48:10.842578  188340 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:48:10.842813  188340 config.go:182] Loaded profile config "kubernetes-upgrade-904287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:48:10.842858  188340 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 03:48:10.842921  188340 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-904287"
	I1004 03:48:10.842935  188340 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-904287"
	W1004 03:48:10.842942  188340 addons.go:243] addon storage-provisioner should already be in state true
	I1004 03:48:10.842974  188340 host.go:66] Checking if "kubernetes-upgrade-904287" exists ...
	I1004 03:48:10.843391  188340 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-904287 --format={{.State.Status}}
	I1004 03:48:10.843631  188340 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-904287"
	I1004 03:48:10.843669  188340 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-904287"
	I1004 03:48:10.844111  188340 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-904287 --format={{.State.Status}}
	I1004 03:48:10.851746  188340 out.go:177] * Verifying Kubernetes components...
	I1004 03:48:10.865481  188340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:48:10.895073  188340 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.449919725Z" level=info msg="Started container" PID=2619 containerID=6e2a5d0b9ab3eb5a96949e350a2f0eec6b4d45fd54384d9fdf9fb6b033044085 description=kube-system/kube-controller-manager-pause-261592/kube-controller-manager id=771738e9-92d2-49fc-b767-e364d91dc6a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5015fc0f756f65f64c10f8ac76db6b5d66c8011d5e10b8cf9c1369465196bdd6
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.540745017Z" level=info msg="Created container 16f000d572a96304522fbd46b6ebb85bf3fe6123a4f86bf24eeb844063bf5a71: kube-system/kube-apiserver-pause-261592/kube-apiserver" id=c6ecbf4c-b9ff-4b9b-9e95-9b612a4c7568 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.541638803Z" level=info msg="Starting container: 16f000d572a96304522fbd46b6ebb85bf3fe6123a4f86bf24eeb844063bf5a71" id=b9bd6d8a-8d8b-40c2-bfc5-1b7e28a94543 name=/runtime.v1.RuntimeService/StartContainer
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.549551470Z" level=info msg="Created container fb198c02c4ed24f961fb6303226c64528c187c0f217f6e982bc160b51f2db2e1: kube-system/kube-scheduler-pause-261592/kube-scheduler" id=3953d026-49f0-4c44-836b-6b9c4a6acefc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.550200626Z" level=info msg="Starting container: fb198c02c4ed24f961fb6303226c64528c187c0f217f6e982bc160b51f2db2e1" id=b7e84d80-fb66-4a3e-9ef8-03ed63b32095 name=/runtime.v1.RuntimeService/StartContainer
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.555472216Z" level=info msg="Created container 6e45908eccf2425f42fda0e757ce218c98526882f85c4417b7e5dd06150804d4: kube-system/coredns-7c65d6cfc9-42rv6/coredns" id=1e3bef98-cb67-4941-969f-178a70962f45 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.556158228Z" level=info msg="Starting container: 6e45908eccf2425f42fda0e757ce218c98526882f85c4417b7e5dd06150804d4" id=1906e354-fcc1-4ff0-92dc-ef65de86ae0a name=/runtime.v1.RuntimeService/StartContainer
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.569749456Z" level=info msg="Started container" PID=2714 containerID=16f000d572a96304522fbd46b6ebb85bf3fe6123a4f86bf24eeb844063bf5a71 description=kube-system/kube-apiserver-pause-261592/kube-apiserver id=b9bd6d8a-8d8b-40c2-bfc5-1b7e28a94543 name=/runtime.v1.RuntimeService/StartContainer sandboxID=75312aca7f362ccfeb58e52c61f54ca5e5c166f153dc9f29163c90ddc460e347
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.574394592Z" level=info msg="Started container" PID=2641 containerID=fb198c02c4ed24f961fb6303226c64528c187c0f217f6e982bc160b51f2db2e1 description=kube-system/kube-scheduler-pause-261592/kube-scheduler id=b7e84d80-fb66-4a3e-9ef8-03ed63b32095 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d9b6da3b39af1b4419516eb76cf4c25d86d5723f9abfd18fc27c8ddff8b5e55
	Oct 04 03:47:51 pause-261592 crio[2430]: time="2024-10-04 03:47:51.606284493Z" level=info msg="Started container" PID=2750 containerID=6e45908eccf2425f42fda0e757ce218c98526882f85c4417b7e5dd06150804d4 description=kube-system/coredns-7c65d6cfc9-42rv6/coredns id=1906e354-fcc1-4ff0-92dc-ef65de86ae0a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c79a618d02624fd4d66f31b14489fb4b8ee1cad6bfa65253c5094ea36627a59
	Oct 04 03:47:52 pause-261592 crio[2430]: time="2024-10-04 03:47:52.022221885Z" level=info msg="Created container e63018b158a7bd670279f3703aa2d851093804185d342910399a1086f42f07f5: kube-system/kube-proxy-k84f2/kube-proxy" id=686324ba-d342-479b-92c1-5d8cdc0cdb2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 04 03:47:52 pause-261592 crio[2430]: time="2024-10-04 03:47:52.023497470Z" level=info msg="Starting container: e63018b158a7bd670279f3703aa2d851093804185d342910399a1086f42f07f5" id=83943def-4d88-4cfb-9ecc-e07961a5fc9f name=/runtime.v1.RuntimeService/StartContainer
	Oct 04 03:47:52 pause-261592 crio[2430]: time="2024-10-04 03:47:52.278812962Z" level=info msg="Started container" PID=2720 containerID=e63018b158a7bd670279f3703aa2d851093804185d342910399a1086f42f07f5 description=kube-system/kube-proxy-k84f2/kube-proxy id=83943def-4d88-4cfb-9ecc-e07961a5fc9f name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f9f23612631fecc50dc6c548f839c9511cc9c19a2bb2d72a0d4c94e72a98301
	Oct 04 03:48:01 pause-261592 crio[2430]: time="2024-10-04 03:48:01.974054571Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 04 03:48:01 pause-261592 crio[2430]: time="2024-10-04 03:48:01.993352151Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 04 03:48:01 pause-261592 crio[2430]: time="2024-10-04 03:48:01.993385316Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 04 03:48:01 pause-261592 crio[2430]: time="2024-10-04 03:48:01.993401077Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.011399189Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.011433420Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.011449321Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.030440391Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.030477461Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.030494355Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.049180449Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 04 03:48:02 pause-261592 crio[2430]: time="2024-10-04 03:48:02.049471239Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6e45908eccf24       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   20 seconds ago       Running             coredns                   1                   5c79a618d0262       coredns-7c65d6cfc9-42rv6
	16f000d572a96       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   20 seconds ago       Running             kube-apiserver            1                   75312aca7f362       kube-apiserver-pause-261592
	e63018b158a7b       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   20 seconds ago       Running             kube-proxy                1                   2f9f23612631f       kube-proxy-k84f2
	1bca1e623975c       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   20 seconds ago       Running             kindnet-cni               1                   79aac73c73747       kindnet-srv54
	02461d0ecefa6       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   20 seconds ago       Running             etcd                      1                   087d654c5df5d       etcd-pause-261592
	ba30065a5829c       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   20 seconds ago       Running             coredns                   1                   b2eb1f203942d       coredns-7c65d6cfc9-9n4vl
	6e2a5d0b9ab3e       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   20 seconds ago       Running             kube-controller-manager   1                   5015fc0f756f6       kube-controller-manager-pause-261592
	fb198c02c4ed2       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   20 seconds ago       Running             kube-scheduler            1                   1d9b6da3b39af       kube-scheduler-pause-261592
	5c89115e27ff1       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   34 seconds ago       Exited              coredns                   0                   5c79a618d0262       coredns-7c65d6cfc9-42rv6
	20c26555e198e       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   34 seconds ago       Exited              coredns                   0                   b2eb1f203942d       coredns-7c65d6cfc9-9n4vl
	c0e2e70ad4035       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   About a minute ago   Exited              kube-proxy                0                   2f9f23612631f       kube-proxy-k84f2
	ffb8dcb7cdf70       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Exited              kindnet-cni               0                   79aac73c73747       kindnet-srv54
	62740614906c4       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   About a minute ago   Exited              etcd                      0                   087d654c5df5d       etcd-pause-261592
	791224c1c5bea       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   About a minute ago   Exited              kube-apiserver            0                   75312aca7f362       kube-apiserver-pause-261592
	d43427b82f7de       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   About a minute ago   Exited              kube-controller-manager   0                   5015fc0f756f6       kube-controller-manager-pause-261592
	0f52739bcc35b       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   About a minute ago   Exited              kube-scheduler            0                   1d9b6da3b39af       kube-scheduler-pause-261592
	
	
	==> coredns [20c26555e198ebf41e6877314f76c6c3ec980e2db646eddbd5241397bbd47b93] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49372 - 12140 "HINFO IN 4977033123192698539.7224489375946866723. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019174511s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5c89115e27ff1803f2493901481a19373c763d1e1cf1b45b61c9f244f35a1f17] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32891 - 56679 "HINFO IN 5884894497945970033.5634341614239618359. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020473488s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6e45908eccf2425f42fda0e757ce218c98526882f85c4417b7e5dd06150804d4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40796 - 21458 "HINFO IN 5834418650777757923.768020086510170513. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.031987598s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ba30065a5829c70a38c30a7b011095b86cf35e7e644f2c47c8f58dd95b27ed2f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45938 - 32599 "HINFO IN 907179989567173052.217953983079795387. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.017102979s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               pause-261592
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-261592
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=pause-261592
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T03_46_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:46:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-261592
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:48:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:47:37 +0000   Fri, 04 Oct 2024 03:46:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:47:37 +0000   Fri, 04 Oct 2024 03:46:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:47:37 +0000   Fri, 04 Oct 2024 03:46:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:47:37 +0000   Fri, 04 Oct 2024 03:47:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-261592
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2df4eb40c314411ac012375b7a19ec0
	  System UUID:                e601b0c3-07cc-400d-8910-290003d13814
	  Boot ID:                    cc975b9c-d4f7-443e-a63b-68cdfd7ad286
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-42rv6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 coredns-7c65d6cfc9-9n4vl                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-pause-261592                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kindnet-srv54                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-pause-261592             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-261592    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-k84f2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-261592             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 75s   kube-proxy       
	  Normal   Starting                 10s   kube-proxy       
	  Normal   Starting                 82s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  82s   kubelet          Node pause-261592 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s   kubelet          Node pause-261592 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s   kubelet          Node pause-261592 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           78s   node-controller  Node pause-261592 event: Registered Node pause-261592 in Controller
	  Normal   NodeReady                35s   kubelet          Node pause-261592 status is now: NodeReady
	  Normal   RegisteredNode           9s    node-controller  Node pause-261592 event: Registered Node pause-261592 in Controller
	
	
	==> dmesg <==
	[Oct 4 02:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015570] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.529270] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.049348] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015318] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.608453] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.834894] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 4 03:11] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 4 03:45] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [02461d0ecefa6fc0843c5040172ef8feae2e780ed9db3d77adfa73f1de49e8b5] <==
	{"level":"info","ts":"2024-10-04T03:47:51.745389Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-04T03:47:51.745540Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-04T03:47:51.745578Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-04T03:47:51.746726Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:47:51.751899Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-04T03:47:51.754769Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-10-04T03:47:51.761411Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-10-04T03:47:51.762865Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-04T03:47:51.762958Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-04T03:47:51.870968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-04T03:47:51.871083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-04T03:47:51.871155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-10-04T03:47:51.871211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2024-10-04T03:47:51.871250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-10-04T03:47:51.871308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2024-10-04T03:47:51.871341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-10-04T03:47:51.879389Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-261592 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:47:51.879443Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:47:51.879855Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:47:51.879962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:47:51.887325Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:47:51.888286Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:47:51.889151Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:47:51.893947Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:47:51.894799Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> etcd [62740614906c4108c264aa4ee766e9fff025c5ef0762e785b5a44f65ec991081] <==
	{"level":"info","ts":"2024-10-04T03:46:43.605333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2024-10-04T03:46:43.605341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-10-04T03:46:43.609430Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-261592 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:46:43.609471Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:46:43.609727Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:46:43.612468Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:46:43.613429Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:46:43.614472Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:46:43.615400Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:46:43.622448Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-10-04T03:46:43.617432Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:46:43.623053Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:46:43.623126Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:46:43.626664Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:46:43.626735Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:47:42.459423Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-04T03:47:42.459492Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-261592","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"warn","ts":"2024-10-04T03:47:42.459579Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:47:42.459672Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:47:42.556006Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:47:42.556072Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-04T03:47:42.556135Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2024-10-04T03:47:42.558203Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-10-04T03:47:42.558356Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-10-04T03:47:42.558371Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-261592","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 03:48:12 up  1:30,  0 users,  load average: 5.40, 3.35, 2.49
	Linux pause-261592 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1bca1e623975c87aca561b3275a5bc55990583c7a23ab4d23b7595824e768c0a] <==
	I1004 03:47:51.527746       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1004 03:47:51.532039       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1004 03:47:51.532258       1 main.go:148] setting mtu 1500 for CNI 
	I1004 03:47:51.532307       1 main.go:178] kindnetd IP family: "ipv4"
	I1004 03:47:51.532349       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1004 03:47:51.974021       1 controller.go:334] Starting controller kube-network-policies
	I1004 03:47:52.016198       1 controller.go:338] Waiting for informer caches to sync
	I1004 03:47:52.021316       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1004 03:48:01.321806       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1004 03:48:01.321860       1 metrics.go:61] Registering metrics
	I1004 03:48:01.321941       1 controller.go:374] Syncing nftables rules
	I1004 03:48:01.973755       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1004 03:48:01.973845       1 main.go:299] handling current node
	I1004 03:48:11.975000       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1004 03:48:11.975064       1 main.go:299] handling current node
	
	
	==> kindnet [ffb8dcb7cdf70e6e7e692b2a1a724f48acf77856b5fc252be5e579d9316c71b8] <==
	W1004 03:47:26.819821       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1004 03:47:26.819913       1 trace.go:236] Trace[398187072]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (04-Oct-2024 03:46:56.818) (total time: 30001ms):
	Trace[398187072]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:47:26.819)
	Trace[398187072]: [30.00112748s] [30.00112748s] END
	E1004 03:47:26.819937       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W1004 03:47:26.819825       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W1004 03:47:26.820011       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1004 03:47:26.820066       1 trace.go:236] Trace[632445758]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (04-Oct-2024 03:46:56.819) (total time: 30000ms):
	Trace[632445758]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (03:47:26.820)
	Trace[632445758]: [30.000840621s] [30.000840621s] END
	E1004 03:47:26.820081       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1004 03:47:26.820037       1 trace.go:236] Trace[1197916360]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (04-Oct-2024 03:46:56.818) (total time: 30001ms):
	Trace[1197916360]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:47:26.819)
	Trace[1197916360]: [30.001518461s] [30.001518461s] END
	E1004 03:47:26.820094       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W1004 03:47:26.820218       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1004 03:47:26.820268       1 trace.go:236] Trace[258284427]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (04-Oct-2024 03:46:56.819) (total time: 30001ms):
	Trace[258284427]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:47:26.820)
	Trace[258284427]: [30.00110097s] [30.00110097s] END
	E1004 03:47:26.820284       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1004 03:47:28.318774       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1004 03:47:28.318822       1 metrics.go:61] Registering metrics
	I1004 03:47:28.318882       1 controller.go:374] Syncing nftables rules
	I1004 03:47:36.825274       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1004 03:47:36.825336       1 main.go:299] handling current node
	
	
	==> kube-apiserver [16f000d572a96304522fbd46b6ebb85bf3fe6123a4f86bf24eeb844063bf5a71] <==
	I1004 03:48:00.799840       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1004 03:48:00.799850       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1004 03:48:00.743809       1 controller.go:119] Starting legacy_token_tracking_controller
	I1004 03:48:01.057872       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I1004 03:48:01.210902       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1004 03:48:01.265633       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:48:01.302692       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:48:01.302720       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:48:01.303107       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1004 03:48:01.303375       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 03:48:01.309779       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1004 03:48:01.309843       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1004 03:48:01.310182       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 03:48:01.310204       1 policy_source.go:224] refreshing policies
	I1004 03:48:01.310820       1 aggregator.go:171] initial CRD sync complete...
	I1004 03:48:01.310840       1 autoregister_controller.go:144] Starting autoregister controller
	I1004 03:48:01.310846       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 03:48:01.310852       1 cache.go:39] Caches are synced for autoregister controller
	I1004 03:48:01.311001       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1004 03:48:01.356860       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1004 03:48:01.361711       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1004 03:48:01.367843       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:48:01.801601       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:48:04.076831       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:48:04.178833       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [791224c1c5beaa48a02dd37c47d161a52607518ce5271805ba2b539a363603d9] <==
	W1004 03:47:42.519035       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519117       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519180       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519237       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519295       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519348       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519406       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519460       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519518       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519576       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519627       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519684       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.519988       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.522580       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.522650       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.522719       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.522768       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.522806       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.522857       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.524679       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.524735       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.524773       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.524812       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.524850       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:47:42.524887       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [6e2a5d0b9ab3eb5a96949e350a2f0eec6b4d45fd54384d9fdf9fb6b033044085] <==
	I1004 03:48:03.861024       1 shared_informer.go:320] Caches are synced for taint
	I1004 03:48:03.861242       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1004 03:48:03.861729       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-261592"
	I1004 03:48:03.861898       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1004 03:48:03.877453       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1004 03:48:03.877562       1 shared_informer.go:320] Caches are synced for ephemeral
	I1004 03:48:03.877573       1 shared_informer.go:320] Caches are synced for deployment
	I1004 03:48:03.877582       1 shared_informer.go:320] Caches are synced for daemon sets
	I1004 03:48:03.877592       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1004 03:48:03.881942       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1004 03:48:03.887560       1 shared_informer.go:320] Caches are synced for GC
	I1004 03:48:03.918600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.624463ms"
	I1004 03:48:03.924472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="64.589µs"
	I1004 03:48:03.923420       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1004 03:48:03.929766       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:48:03.974905       1 shared_informer.go:320] Caches are synced for endpoint
	I1004 03:48:03.979369       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1004 03:48:03.995950       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:48:04.401119       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:48:04.401275       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 03:48:04.410427       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:48:07.714757       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.966908ms"
	I1004 03:48:07.718287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.385µs"
	I1004 03:48:07.789677       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.765034ms"
	I1004 03:48:07.789940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="113.843µs"
	
	
	==> kube-controller-manager [d43427b82f7def3fbcbe40e387ba39f632412a2e72ed6e718610483c1cbff0ce] <==
	I1004 03:46:54.629541       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1004 03:46:54.632349       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:46:54.676427       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1004 03:46:54.697679       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1004 03:46:54.700031       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:46:54.769737       1 shared_informer.go:320] Caches are synced for attach detach
	I1004 03:46:55.164489       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:46:55.176015       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:46:55.176062       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 03:46:55.441657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-261592"
	I1004 03:46:55.714858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="171.848392ms"
	I1004 03:46:55.728530       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="13.623638ms"
	I1004 03:46:55.747792       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.211047ms"
	I1004 03:46:55.747893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.459µs"
	I1004 03:47:37.337472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-261592"
	I1004 03:47:37.348957       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-261592"
	I1004 03:47:37.357597       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="92.305µs"
	I1004 03:47:37.363535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="50.6µs"
	I1004 03:47:37.376395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="91.657µs"
	I1004 03:47:37.388007       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.751µs"
	I1004 03:47:38.759507       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.222µs"
	I1004 03:47:38.798210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.274494ms"
	I1004 03:47:38.817984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.658207ms"
	I1004 03:47:38.818270       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.057µs"
	I1004 03:47:39.482805       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c0e2e70ad4035836e7818c915e0db86d4485f6ff6afabe892db8d1e93822e1ea] <==
	I1004 03:46:56.454385       1 server_linux.go:66] "Using iptables proxy"
	I1004 03:46:56.548595       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	E1004 03:46:56.548745       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:46:56.617529       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1004 03:46:56.617651       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:46:56.619524       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:46:56.619950       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:46:56.620124       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:46:56.621485       1 config.go:199] "Starting service config controller"
	I1004 03:46:56.621564       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:46:56.621622       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:46:56.621652       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:46:56.622221       1 config.go:328] "Starting node config controller"
	I1004 03:46:56.623855       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:46:56.723456       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:46:56.723510       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:46:56.725377       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e63018b158a7bd670279f3703aa2d851093804185d342910399a1086f42f07f5] <==
	I1004 03:47:55.848090       1 server_linux.go:66] "Using iptables proxy"
	I1004 03:48:01.388257       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	E1004 03:48:01.388499       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:48:01.647585       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1004 03:48:01.648081       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:48:01.672560       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:48:01.673065       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:48:01.673341       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:48:01.674641       1 config.go:199] "Starting service config controller"
	I1004 03:48:01.674747       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:48:01.674833       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:48:01.674882       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:48:01.681112       1 config.go:328] "Starting node config controller"
	I1004 03:48:01.681267       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:48:01.775938       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:48:01.775988       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:48:01.783219       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f52739bcc35ba86f784c063cd2903825f9bafc686012ce3c188937f35f5bb1d] <==
	E1004 03:46:47.874826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.874865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 03:46:47.874910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.874929       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 03:46:47.875005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.874982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:46:47.875080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.875103       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:46:47.875157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.875161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:46:47.875244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.874884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 03:46:47.875320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:47.875040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 03:46:47.877385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:48.739360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 03:46:48.739477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:48.958253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:46:48.958370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:48.959524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 03:46:48.959614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:46:49.009532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:46:49.009582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1004 03:46:49.557603       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1004 03:47:42.458756       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fb198c02c4ed24f961fb6303226c64528c187c0f217f6e982bc160b51f2db2e1] <==
	I1004 03:47:56.948401       1 serving.go:386] Generated self-signed cert in-memory
	W1004 03:48:01.143488       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 03:48:01.143596       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 03:48:01.143633       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 03:48:01.143666       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 03:48:01.304546       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 03:48:01.304635       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:48:01.316170       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 03:48:01.319387       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 03:48:01.319426       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:48:01.319458       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 03:48:01.420598       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.813420    1523 status_manager.go:851] "Failed to get status for pod" podUID="cf0f8906452862a19e15cc02d1dc003a" pod="kube-system/etcd-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.813565    1523 status_manager.go:851] "Failed to get status for pod" podUID="cd9b2bf70ea7f34f08b9f659d966a9c0" pod="kube-system/kube-scheduler-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.814753    1523 status_manager.go:851] "Failed to get status for pod" podUID="84cd1b956167e16e9e2a1ed0b5d101ce" pod="kube-system/kube-controller-manager-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.820634    1523 status_manager.go:851] "Failed to get status for pod" podUID="e49267eabe20acbbe7e6af0123b5c4f9" pod="kube-system/kube-apiserver-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.821031    1523 status_manager.go:851] "Failed to get status for pod" podUID="a063f599-caec-4865-9852-66e0765f7359" pod="kube-system/kindnet-srv54" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-srv54\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.821470    1523 status_manager.go:851] "Failed to get status for pod" podUID="7c42b79f-7f6b-4035-a550-f5c278021ea2" pod="kube-system/kube-proxy-k84f2" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k84f2\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.821755    1523 status_manager.go:851] "Failed to get status for pod" podUID="3c8258a1-0c38-4c03-8d36-ee9b2606feb9" pod="kube-system/coredns-7c65d6cfc9-9n4vl" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9n4vl\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.822022    1523 status_manager.go:851] "Failed to get status for pod" podUID="9c0b7172-82ef-42e6-bf7e-126917a5f027" pod="kube-system/coredns-7c65d6cfc9-42rv6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-42rv6\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.822272    1523 status_manager.go:851] "Failed to get status for pod" podUID="cf0f8906452862a19e15cc02d1dc003a" pod="kube-system/etcd-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.822526    1523 status_manager.go:851] "Failed to get status for pod" podUID="cd9b2bf70ea7f34f08b9f659d966a9c0" pod="kube-system/kube-scheduler-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.822823    1523 status_manager.go:851] "Failed to get status for pod" podUID="cf0f8906452862a19e15cc02d1dc003a" pod="kube-system/etcd-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.823103    1523 status_manager.go:851] "Failed to get status for pod" podUID="cd9b2bf70ea7f34f08b9f659d966a9c0" pod="kube-system/kube-scheduler-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.823358    1523 status_manager.go:851] "Failed to get status for pod" podUID="84cd1b956167e16e9e2a1ed0b5d101ce" pod="kube-system/kube-controller-manager-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.823638    1523 status_manager.go:851] "Failed to get status for pod" podUID="e49267eabe20acbbe7e6af0123b5c4f9" pod="kube-system/kube-apiserver-pause-261592" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-261592\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.823912    1523 status_manager.go:851] "Failed to get status for pod" podUID="a063f599-caec-4865-9852-66e0765f7359" pod="kube-system/kindnet-srv54" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-srv54\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.824187    1523 status_manager.go:851] "Failed to get status for pod" podUID="7c42b79f-7f6b-4035-a550-f5c278021ea2" pod="kube-system/kube-proxy-k84f2" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k84f2\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.824464    1523 status_manager.go:851] "Failed to get status for pod" podUID="3c8258a1-0c38-4c03-8d36-ee9b2606feb9" pod="kube-system/coredns-7c65d6cfc9-9n4vl" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9n4vl\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:47:51 pause-261592 kubelet[1523]: I1004 03:47:51.824746    1523 status_manager.go:851] "Failed to get status for pod" podUID="9c0b7172-82ef-42e6-bf7e-126917a5f027" pod="kube-system/coredns-7c65d6cfc9-42rv6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-42rv6\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Oct 04 03:48:00 pause-261592 kubelet[1523]: E1004 03:48:00.666253    1523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728013680666027306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125700,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:48:00 pause-261592 kubelet[1523]: E1004 03:48:00.666294    1523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728013680666027306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125700,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:48:00 pause-261592 kubelet[1523]: E1004 03:48:00.918797    1523 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Oct 04 03:48:00 pause-261592 kubelet[1523]: E1004 03:48:00.919525    1523 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Oct 04 03:48:00 pause-261592 kubelet[1523]: E1004 03:48:00.919688    1523 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Oct 04 03:48:10 pause-261592 kubelet[1523]: E1004 03:48:10.670847    1523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728013690670259443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125700,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:48:10 pause-261592 kubelet[1523]: E1004 03:48:10.670889    1523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728013690670259443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125700,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-261592 -n pause-261592
helpers_test.go:261: (dbg) Run:  kubectl --context pause-261592 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (32.48s)


Test pass (289/323)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.76
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 5.28
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.19
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.51
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 203.75
31 TestAddons/serial/GCPAuth/Namespaces 0.19
34 TestAddons/parallel/Registry 18.16
36 TestAddons/parallel/InspektorGadget 11.82
37 TestAddons/parallel/Logviewer 6.59
40 TestAddons/parallel/CSI 52.68
41 TestAddons/parallel/Headlamp 17.74
42 TestAddons/parallel/CloudSpanner 6.54
43 TestAddons/parallel/LocalPath 53.95
44 TestAddons/parallel/NvidiaDevicePlugin 6.52
45 TestAddons/parallel/Yakd 11.88
46 TestAddons/StoppedEnableDisable 12.18
47 TestCertOptions 34.24
48 TestCertExpiration 239.38
50 TestForceSystemdFlag 34.78
51 TestForceSystemdEnv 48.07
57 TestErrorSpam/setup 29.75
58 TestErrorSpam/start 0.73
59 TestErrorSpam/status 1.02
60 TestErrorSpam/pause 1.7
61 TestErrorSpam/unpause 1.76
62 TestErrorSpam/stop 1.49
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 76.61
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 23.86
69 TestFunctional/serial/KubeContext 0.06
70 TestFunctional/serial/KubectlGetPods 0.1
73 TestFunctional/serial/CacheCmd/cache/add_remote 4.37
74 TestFunctional/serial/CacheCmd/cache/add_local 1.43
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
76 TestFunctional/serial/CacheCmd/cache/list 0.05
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
78 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
79 TestFunctional/serial/CacheCmd/cache/delete 0.11
80 TestFunctional/serial/MinikubeKubectlCmd 0.14
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
82 TestFunctional/serial/ExtraConfig 31.57
83 TestFunctional/serial/ComponentHealth 0.09
84 TestFunctional/serial/LogsCmd 1.65
85 TestFunctional/serial/LogsFileCmd 1.71
86 TestFunctional/serial/InvalidService 4.67
88 TestFunctional/parallel/ConfigCmd 0.46
89 TestFunctional/parallel/DashboardCmd 11.09
90 TestFunctional/parallel/DryRun 0.47
91 TestFunctional/parallel/InternationalLanguage 0.18
92 TestFunctional/parallel/StatusCmd 1.19
96 TestFunctional/parallel/ServiceCmdConnect 11.71
97 TestFunctional/parallel/AddonsCmd 0.19
98 TestFunctional/parallel/PersistentVolumeClaim 26.21
100 TestFunctional/parallel/SSHCmd 0.64
101 TestFunctional/parallel/CpCmd 2.25
103 TestFunctional/parallel/FileSync 0.34
104 TestFunctional/parallel/CertSync 2.08
108 TestFunctional/parallel/NodeLabels 0.11
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
112 TestFunctional/parallel/License 0.23
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.5
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
124 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
126 TestFunctional/parallel/ServiceCmd/List 0.59
127 TestFunctional/parallel/ProfileCmd/profile_list 0.45
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
131 TestFunctional/parallel/MountCmd/any-port 9.58
132 TestFunctional/parallel/ServiceCmd/Format 0.43
133 TestFunctional/parallel/ServiceCmd/URL 0.42
134 TestFunctional/parallel/MountCmd/specific-port 2.2
135 TestFunctional/parallel/MountCmd/VerifyCleanup 2.12
136 TestFunctional/parallel/Version/short 0.06
137 TestFunctional/parallel/Version/components 0.93
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
142 TestFunctional/parallel/ImageCommands/ImageBuild 3.73
143 TestFunctional/parallel/ImageCommands/Setup 0.69
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.71
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.26
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 172.15
161 TestMultiControlPlane/serial/DeployApp 7.9
162 TestMultiControlPlane/serial/PingHostFromPods 1.56
163 TestMultiControlPlane/serial/AddWorkerNode 63.42
164 TestMultiControlPlane/serial/NodeLabels 0.1
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 1
166 TestMultiControlPlane/serial/CopyFile 18
167 TestMultiControlPlane/serial/StopSecondaryNode 12.69
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
169 TestMultiControlPlane/serial/RestartSecondaryNode 33.25
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.31
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 214.73
172 TestMultiControlPlane/serial/DeleteSecondaryNode 12.89
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
174 TestMultiControlPlane/serial/StopCluster 35.78
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
177 TestMultiControlPlane/serial/AddSecondaryNode 71.34
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1
182 TestJSONOutput/start/Command 48.51
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.73
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.65
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 5.85
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.23
207 TestKicCustomNetwork/create_custom_network 39.49
208 TestKicCustomNetwork/use_default_bridge_network 36.08
209 TestKicExistingNetwork 31.04
210 TestKicCustomSubnet 31.72
211 TestKicStaticIP 32.69
212 TestMainNoArgs 0.05
213 TestMinikubeProfile 68.18
216 TestMountStart/serial/StartWithMountFirst 9.35
217 TestMountStart/serial/VerifyMountFirst 0.25
218 TestMountStart/serial/StartWithMountSecond 6.7
219 TestMountStart/serial/VerifyMountSecond 0.25
220 TestMountStart/serial/DeleteFirst 1.65
221 TestMountStart/serial/VerifyMountPostDelete 0.26
222 TestMountStart/serial/Stop 1.21
223 TestMountStart/serial/RestartStopped 7.79
224 TestMountStart/serial/VerifyMountPostStop 0.26
227 TestMultiNode/serial/FreshStart2Nodes 106.65
228 TestMultiNode/serial/DeployApp2Nodes 6.32
229 TestMultiNode/serial/PingHostFrom2Pods 0.97
230 TestMultiNode/serial/AddNode 28.72
231 TestMultiNode/serial/MultiNodeLabels 0.09
232 TestMultiNode/serial/ProfileList 0.66
233 TestMultiNode/serial/CopyFile 9.66
234 TestMultiNode/serial/StopNode 2.24
235 TestMultiNode/serial/StartAfterStop 10.29
236 TestMultiNode/serial/RestartKeepsNodes 99.56
237 TestMultiNode/serial/DeleteNode 5.43
238 TestMultiNode/serial/StopMultiNode 23.88
239 TestMultiNode/serial/RestartMultiNode 54.42
240 TestMultiNode/serial/ValidateNameConflict 39.57
245 TestPreload 134.21
247 TestScheduledStopUnix 106.72
250 TestInsufficientStorage 10.18
251 TestRunningBinaryUpgrade 81.18
253 TestKubernetesUpgrade 393.16
254 TestMissingContainerUpgrade 172.17
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
257 TestNoKubernetes/serial/StartWithK8s 36.53
258 TestNoKubernetes/serial/StartWithStopK8s 20.19
259 TestNoKubernetes/serial/Start 6.01
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
261 TestNoKubernetes/serial/ProfileList 1.2
262 TestNoKubernetes/serial/Stop 1.28
263 TestNoKubernetes/serial/StartNoArgs 7.31
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
265 TestStoppedBinaryUpgrade/Setup 0.61
266 TestStoppedBinaryUpgrade/Upgrade 65.4
267 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
276 TestPause/serial/Start 82.68
285 TestNetworkPlugins/group/false 4.56
290 TestStartStop/group/old-k8s-version/serial/FirstStart 154.27
291 TestStartStop/group/old-k8s-version/serial/DeployApp 10.75
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.66
294 TestStartStop/group/no-preload/serial/FirstStart 65.2
295 TestStartStop/group/old-k8s-version/serial/Stop 13.85
296 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
297 TestStartStop/group/old-k8s-version/serial/SecondStart 379.44
298 TestStartStop/group/no-preload/serial/DeployApp 10.41
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
300 TestStartStop/group/no-preload/serial/Stop 11.97
301 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
302 TestStartStop/group/no-preload/serial/SecondStart 265.32
303 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
304 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
305 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
306 TestStartStop/group/no-preload/serial/Pause 3.06
308 TestStartStop/group/embed-certs/serial/FirstStart 78.17
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
312 TestStartStop/group/old-k8s-version/serial/Pause 3.11
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.44
315 TestStartStop/group/embed-certs/serial/DeployApp 9.47
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
317 TestStartStop/group/embed-certs/serial/Stop 11.96
318 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
320 TestStartStop/group/embed-certs/serial/SecondStart 275.28
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.27
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.13
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.17
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
329 TestStartStop/group/embed-certs/serial/Pause 2.97
330 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.17
332 TestStartStop/group/newest-cni/serial/FirstStart 43.47
333 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
334 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.34
335 TestNetworkPlugins/group/auto/Start 87.26
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.89
338 TestStartStop/group/newest-cni/serial/Stop 1.31
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
340 TestStartStop/group/newest-cni/serial/SecondStart 17
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
344 TestStartStop/group/newest-cni/serial/Pause 3.01
345 TestNetworkPlugins/group/kindnet/Start 76.15
346 TestNetworkPlugins/group/auto/KubeletFlags 0.29
347 TestNetworkPlugins/group/auto/NetCatPod 10.31
348 TestNetworkPlugins/group/auto/DNS 0.21
349 TestNetworkPlugins/group/auto/Localhost 0.15
350 TestNetworkPlugins/group/auto/HairPin 0.15
351 TestNetworkPlugins/group/calico/Start 64.88
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
354 TestNetworkPlugins/group/kindnet/NetCatPod 12.38
355 TestNetworkPlugins/group/kindnet/DNS 0.27
356 TestNetworkPlugins/group/kindnet/Localhost 0.21
357 TestNetworkPlugins/group/kindnet/HairPin 0.19
358 TestNetworkPlugins/group/custom-flannel/Start 63.86
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/calico/KubeletFlags 0.39
361 TestNetworkPlugins/group/calico/NetCatPod 12.32
362 TestNetworkPlugins/group/calico/DNS 0.29
363 TestNetworkPlugins/group/calico/Localhost 0.25
364 TestNetworkPlugins/group/calico/HairPin 0.2
365 TestNetworkPlugins/group/enable-default-cni/Start 74.03
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.34
368 TestNetworkPlugins/group/custom-flannel/DNS 0.26
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.32
371 TestNetworkPlugins/group/flannel/Start 58.64
372 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
373 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.41
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
377 TestNetworkPlugins/group/bridge/Start 76.61
378 TestNetworkPlugins/group/flannel/ControllerPod 6.01
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
380 TestNetworkPlugins/group/flannel/NetCatPod 13.29
381 TestNetworkPlugins/group/flannel/DNS 0.19
382 TestNetworkPlugins/group/flannel/Localhost 0.22
383 TestNetworkPlugins/group/flannel/HairPin 0.27
384 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
385 TestNetworkPlugins/group/bridge/NetCatPod 11.26
386 TestNetworkPlugins/group/bridge/DNS 0.16
387 TestNetworkPlugins/group/bridge/Localhost 0.15
388 TestNetworkPlugins/group/bridge/HairPin 0.2
x
+
TestDownloadOnly/v1.20.0/json-events (6.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-843597 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-843597 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.754710713s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.76s)
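
The invocation above can be reproduced outside the test harness; a minimal sketch, assuming the out/minikube-linux-arm64 binary has been built from this checkout:

    out/minikube-linux-arm64 start -o=json --download-only -p download-only-843597 \
      --force --alsologtostderr --kubernetes-version=v1.20.0 \
      --container-runtime=crio --driver=docker
    # -o=json emits each progress step as a JSON event on stdout; piping the
    # stream through `jq .` (if installed) pretty-prints the events this
    # json-events sub-test consumes.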

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1004 02:47:54.723468    7560 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1004 02:47:54.723545    7560 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
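
The preload check only verifies that the cached tarball is on disk; a quick manual equivalent, using the cache path from the log above (on a developer machine this normally lives under ~/.minikube instead of the Jenkins workspace):

    ls -lh /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/
    # Expect preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 to be listed.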

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-843597
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-843597: exit status 85 (59.738403ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-843597 | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC |          |
	|         | -p download-only-843597        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 02:47:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:47:48.006870    7565 out.go:345] Setting OutFile to fd 1 ...
	I1004 02:47:48.007045    7565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:47:48.007073    7565 out.go:358] Setting ErrFile to fd 2...
	I1004 02:47:48.007097    7565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:47:48.007340    7565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	W1004 02:47:48.007492    7565 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19546-2238/.minikube/config/config.json: open /home/jenkins/minikube-integration/19546-2238/.minikube/config/config.json: no such file or directory
	I1004 02:47:48.007945    7565 out.go:352] Setting JSON to true
	I1004 02:47:48.008737    7565 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1813,"bootTime":1728008255,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1004 02:47:48.008839    7565 start.go:139] virtualization:  
	I1004 02:47:48.012571    7565 out.go:97] [download-only-843597] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1004 02:47:48.012762    7565 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball: no such file or directory
	I1004 02:47:48.012814    7565 notify.go:220] Checking for updates...
	I1004 02:47:48.015324    7565 out.go:169] MINIKUBE_LOCATION=19546
	I1004 02:47:48.018215    7565 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:47:48.020985    7565 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 02:47:48.024038    7565 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	I1004 02:47:48.026853    7565 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1004 02:47:48.032080    7565 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1004 02:47:48.032405    7565 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 02:47:48.055486    7565 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 02:47:48.055596    7565 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:47:48.387727    7565 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-04 02:47:48.378116646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:47:48.387832    7565 docker.go:318] overlay module found
	I1004 02:47:48.390615    7565 out.go:97] Using the docker driver based on user configuration
	I1004 02:47:48.390650    7565 start.go:297] selected driver: docker
	I1004 02:47:48.390658    7565 start.go:901] validating driver "docker" against <nil>
	I1004 02:47:48.390755    7565 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:47:48.442919    7565 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-04 02:47:48.43423111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:47:48.443121    7565 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 02:47:48.443423    7565 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1004 02:47:48.443588    7565 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1004 02:47:48.446568    7565 out.go:169] Using Docker driver with root privileges
	I1004 02:47:48.449111    7565 cni.go:84] Creating CNI manager for ""
	I1004 02:47:48.449174    7565 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 02:47:48.449186    7565 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 02:47:48.449274    7565 start.go:340] cluster config:
	{Name:download-only-843597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-843597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:47:48.452028    7565 out.go:97] Starting "download-only-843597" primary control-plane node in "download-only-843597" cluster
	I1004 02:47:48.452053    7565 cache.go:121] Beginning downloading kic base image for docker with crio
	I1004 02:47:48.454768    7565 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1004 02:47:48.454796    7565 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 02:47:48.454934    7565 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 02:47:48.469389    7565 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1004 02:47:48.469581    7565 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1004 02:47:48.469682    7565 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1004 02:47:48.507759    7565 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1004 02:47:48.507802    7565 cache.go:56] Caching tarball of preloaded images
	I1004 02:47:48.507941    7565 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 02:47:48.510855    7565 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1004 02:47:48.510889    7565 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1004 02:47:48.601655    7565 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1004 02:47:52.993836    7565 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1004 02:47:52.993935    7565 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-843597 host does not exist
	  To start a cluster, run: "minikube start -p download-only-843597"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
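
Note that this sub-test passes even though `minikube logs` exits non-zero: with --download-only no host is ever created, so there is no cluster to collect logs from, and a non-zero exit (85 here) is what the assertion expects. A manual check along the same lines:

    out/minikube-linux-arm64 logs -p download-only-843597; echo "exit status: $?"
    # Against a download-only profile this prints the audit/start log shown above
    # and exits non-zero rather than 0.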

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-843597
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (5.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-010684 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-010684 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.284302648s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.28s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1004 02:48:00.404631    7560 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1004 02:48:00.404669    7560 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-010684
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-010684: exit status 85 (58.953259ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-843597 | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC |                     |
	|         | -p download-only-843597        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC | 04 Oct 24 02:47 UTC |
	| delete  | -p download-only-843597        | download-only-843597 | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC | 04 Oct 24 02:47 UTC |
	| start   | -o=json --download-only        | download-only-010684 | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC |                     |
	|         | -p download-only-010684        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 02:47:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:47:55.160239    7766 out.go:345] Setting OutFile to fd 1 ...
	I1004 02:47:55.160410    7766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:47:55.160438    7766 out.go:358] Setting ErrFile to fd 2...
	I1004 02:47:55.160461    7766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:47:55.160708    7766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 02:47:55.161133    7766 out.go:352] Setting JSON to true
	I1004 02:47:55.161945    7766 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1821,"bootTime":1728008255,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1004 02:47:55.162047    7766 start.go:139] virtualization:  
	I1004 02:47:55.164354    7766 out.go:97] [download-only-010684] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 02:47:55.164588    7766 notify.go:220] Checking for updates...
	I1004 02:47:55.166105    7766 out.go:169] MINIKUBE_LOCATION=19546
	I1004 02:47:55.167443    7766 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:47:55.168991    7766 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 02:47:55.170684    7766 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	I1004 02:47:55.172122    7766 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1004 02:47:55.174778    7766 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1004 02:47:55.175048    7766 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 02:47:55.197719    7766 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 02:47:55.197845    7766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:47:55.266010    7766 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-04 02:47:55.256079095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:47:55.266125    7766 docker.go:318] overlay module found
	I1004 02:47:55.269023    7766 out.go:97] Using the docker driver based on user configuration
	I1004 02:47:55.269056    7766 start.go:297] selected driver: docker
	I1004 02:47:55.269062    7766 start.go:901] validating driver "docker" against <nil>
	I1004 02:47:55.269163    7766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:47:55.321405    7766 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-04 02:47:55.312129408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:47:55.321624    7766 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 02:47:55.321904    7766 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1004 02:47:55.322065    7766 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1004 02:47:55.323732    7766 out.go:169] Using Docker driver with root privileges
	I1004 02:47:55.324812    7766 cni.go:84] Creating CNI manager for ""
	I1004 02:47:55.324878    7766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1004 02:47:55.324893    7766 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 02:47:55.324972    7766 start.go:340] cluster config:
	{Name:download-only-010684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-010684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:47:55.326527    7766 out.go:97] Starting "download-only-010684" primary control-plane node in "download-only-010684" cluster
	I1004 02:47:55.326551    7766 cache.go:121] Beginning downloading kic base image for docker with crio
	I1004 02:47:55.328219    7766 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1004 02:47:55.328244    7766 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:47:55.328418    7766 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 02:47:55.343846    7766 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1004 02:47:55.343996    7766 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1004 02:47:55.344022    7766 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1004 02:47:55.344032    7766 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1004 02:47:55.344039    7766 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1004 02:47:55.385646    7766 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1004 02:47:55.385678    7766 cache.go:56] Caching tarball of preloaded images
	I1004 02:47:55.385855    7766 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:47:55.387598    7766 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1004 02:47:55.387626    7766 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1004 02:47:55.473852    7766 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1004 02:47:58.888832    7766 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1004 02:47:58.888935    7766 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19546-2238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1004 02:47:59.740112    7766 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 02:47:59.740498    7766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/download-only-010684/config.json ...
	I1004 02:47:59.740531    7766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/download-only-010684/config.json: {Name:mk5af4cb60b04e2dddc6e6bda3b22bca5e0e8ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:47:59.740705    7766 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:47:59.740870    7766 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19546-2238/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-010684 host does not exist
	  To start a cluster, run: "minikube start -p download-only-010684"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-010684
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.51s)

                                                
                                                
=== RUN   TestBinaryMirror
I1004 02:48:01.547938    7560 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-541238 --alsologtostderr --binary-mirror http://127.0.0.1:36901 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-541238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-541238
--- PASS: TestBinaryMirror (0.51s)
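
The test points minikube at a local HTTP endpoint instead of dl.k8s.io for the kubectl binary; the flag usage below is taken from the run above (the mirror is presumably expected to expose the same /release/<version>/bin/<os>/<arch> path layout as dl.k8s.io):

    out/minikube-linux-arm64 start --download-only -p binary-mirror-541238 \
      --alsologtostderr --binary-mirror http://127.0.0.1:36901 \
      --driver=docker --container-runtime=crio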

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-561541
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-561541: exit status 85 (60.234501ms)

                                                
                                                
-- stdout --
	* Profile "addons-561541" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-561541"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:956: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-561541
addons_test.go:956: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-561541: exit status 85 (64.745254ms)

                                                
                                                
-- stdout --
	* Profile "addons-561541" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-561541"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (203.75s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-561541 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=logviewer --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-561541 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=logviewer --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m23.750899286s)
--- PASS: TestAddons/Setup (203.75s)
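
The setup enables fourteen addons in a single start; a smaller local equivalent, useful when only a couple of the parallel addon tests are of interest (the profile name and addon subset here are illustrative, not what the harness runs):

    out/minikube-linux-arm64 start -p addons-demo --wait=true --memory=4000 \
      --alsologtostderr --driver=docker --container-runtime=crio \
      --addons=registry --addons=metrics-server --addons=ingress \
      --addons=csi-hostpath-driver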

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:570: (dbg) Run:  kubectl --context addons-561541 create ns new-namespace
addons_test.go:584: (dbg) Run:  kubectl --context addons-561541 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)
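
The two kubectl steps above are the whole assertion: after a new namespace is created, the gcp-auth secret must already be retrievable there, which implies the gcp-auth addon copies its credentials secret into namespaces as they appear. Reproduced by hand:

    kubectl --context addons-561541 create ns new-namespace
    kubectl --context addons-561541 get secret gcp-auth -n new-namespace
    # The second command should return the secret rather than a NotFound error.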

                                                
                                    
x
+
TestAddons/parallel/Registry (18.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:322: registry stabilized in 11.626577ms
addons_test.go:324: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I1004 02:59:38.375703    7560 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1004 02:59:38.375727    7560 kapi.go:107] duration metric: took 12.338203ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "registry-66c9cd494c-lc5j7" [d1434ec1-9246-4eec-97cd-0ae38734e96e] Running
addons_test.go:324: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003358321s
addons_test.go:327: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2kl22" [ee49d77e-84c1-4b75-b458-f901291a1eb8] Running
addons_test.go:327: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004892732s
addons_test.go:332: (dbg) Run:  kubectl --context addons-561541 delete po -l run=registry-test --now
addons_test.go:337: (dbg) Run:  kubectl --context addons-561541 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:337: (dbg) Done: kubectl --context addons-561541 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.848100443s)
addons_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 ip
2024/10/04 02:59:55 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable registry --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-561541 addons disable registry --alsologtostderr -v=1: (1.046361784s)
--- PASS: TestAddons/parallel/Registry (18.16s)
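
The in-cluster reachability check in the middle of this test can be run standalone; it is just a throwaway busybox pod probing the registry Service DNS name (command copied from the log above):

    kubectl --context addons-561541 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # --spider checks the URL without downloading a body and -S prints the
    # response headers, so a 200 confirms the Service resolves and answers.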

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.82s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:759: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-g6ktr" [5dcd7a4b-0e05-4e00-a6a9-2d72453c9b35] Running
addons_test.go:759: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003746641s
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-561541 addons disable inspektor-gadget --alsologtostderr -v=1: (5.811515037s)
--- PASS: TestAddons/parallel/InspektorGadget (11.82s)
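
The helper's 8m0s wait for pods carrying the k8s-app=gadget label is roughly equivalent to a kubectl wait (illustrative; this is not what the Go helper literally runs):

    kubectl --context addons-561541 -n gadget wait pod \
      -l k8s-app=gadget --for=condition=Ready --timeout=8m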

                                                
                                    
x
+
TestAddons/parallel/Logviewer (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/Logviewer
=== PAUSE TestAddons/parallel/Logviewer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Logviewer
addons_test.go:769: (dbg) TestAddons/parallel/Logviewer: waiting 8m0s for pods matching "app=logviewer" in namespace "kube-system" ...
helpers_test.go:344: "logviewer-7c79c8bcc9-2b554" [75a7f403-12b6-4f98-b0af-8bf7c3aa0ab1] Running
addons_test.go:769: (dbg) TestAddons/parallel/Logviewer: app=logviewer healthy within 6.004289846s
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable logviewer --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Logviewer (6.59s)

                                                
                                    
x
+
TestAddons/parallel/CSI (52.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1004 02:59:38.363402    7560 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:489: csi-hostpath-driver pods stabilized in 12.34759ms
addons_test.go:492: (dbg) Run:  kubectl --context addons-561541 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:497: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:502: (dbg) Run:  kubectl --context addons-561541 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:507: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [795ece74-de6a-4686-942d-1fdd25bbe0d8] Pending
helpers_test.go:344: "task-pv-pod" [795ece74-de6a-4686-942d-1fdd25bbe0d8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [795ece74-de6a-4686-942d-1fdd25bbe0d8] Running
addons_test.go:507: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003818859s
addons_test.go:512: (dbg) Run:  kubectl --context addons-561541 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:517: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-561541 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-561541 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:522: (dbg) Run:  kubectl --context addons-561541 delete pod task-pv-pod
addons_test.go:528: (dbg) Run:  kubectl --context addons-561541 delete pvc hpvc
addons_test.go:534: (dbg) Run:  kubectl --context addons-561541 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-561541 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:549: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [88e787ab-e93c-4557-8047-7f4e0f280778] Pending
helpers_test.go:344: "task-pv-pod-restore" [88e787ab-e93c-4557-8047-7f4e0f280778] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [88e787ab-e93c-4557-8047-7f4e0f280778] Running
addons_test.go:549: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004534505s
addons_test.go:554: (dbg) Run:  kubectl --context addons-561541 delete pod task-pv-pod-restore
addons_test.go:554: (dbg) Done: kubectl --context addons-561541 delete pod task-pv-pod-restore: (1.425677419s)
addons_test.go:558: (dbg) Run:  kubectl --context addons-561541 delete pvc hpvc-restore
addons_test.go:562: (dbg) Run:  kubectl --context addons-561541 delete volumesnapshot new-snapshot-demo
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-561541 addons disable volumesnapshots --alsologtostderr -v=1: (1.006288336s)
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-561541 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.011155213s)
--- PASS: TestAddons/parallel/CSI (52.68s)
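
The long runs of identical `get pvc ... jsonpath={.status.phase}` lines earlier in this test are the helper polling until the claim leaves Pending; a compact shell equivalent of one such wait (assuming the claim eventually reports Bound, as it must have done for the dependent pods above to run):

    until [ "$(kubectl --context addons-561541 get pvc hpvc -n default \
        -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done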

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:744: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-561541 --alsologtostderr -v=1
addons_test.go:749: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-cwftv" [f6d0e37c-ee04-4b9c-abab-f73437534e7f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-cwftv" [f6d0e37c-ee04-4b9c-abab-f73437534e7f] Running
addons_test.go:749: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003605342s
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable headlamp --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-561541 addons disable headlamp --alsologtostderr -v=1: (5.775046861s)
--- PASS: TestAddons/parallel/Headlamp (17.74s)

TestAddons/parallel/CloudSpanner (6.54s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:786: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-ht9lq" [613cdc13-6fc6-4fc8-ba35-8a236672f996] Running
addons_test.go:786: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003669051s
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.54s)

TestAddons/parallel/LocalPath (53.95s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:894: (dbg) Run:  kubectl --context addons-561541 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:900: (dbg) Run:  kubectl --context addons-561541 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:904: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-561541 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:907: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8a0f59c0-6303-441d-9b79-22d841e4e0be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8a0f59c0-6303-441d-9b79-22d841e4e0be] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8a0f59c0-6303-441d-9b79-22d841e4e0be] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:907: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004753793s
addons_test.go:912: (dbg) Run:  kubectl --context addons-561541 get pvc test-pvc -o=json
addons_test.go:921: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 ssh "cat /opt/local-path-provisioner/pvc-7e10a70c-e181-4d72-a74e-5076f85972f6_default_test-pvc/file1"
addons_test.go:933: (dbg) Run:  kubectl --context addons-561541 delete pod test-local-path
addons_test.go:937: (dbg) Run:  kubectl --context addons-561541 delete pvc test-pvc
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-561541 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.458018142s)
--- PASS: TestAddons/parallel/LocalPath (53.95s)

TestAddons/parallel/NvidiaDevicePlugin (6.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:969: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5nsmh" [417c82a7-a3be-4373-b14a-9d52e4aaa1d2] Running
addons_test.go:969: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004054355s
addons_test.go:972: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-561541
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

TestAddons/parallel/Yakd (11.88s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:980: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-n6q2k" [bc0efa78-a719-45d1-9e4b-d521e5aee495] Running
addons_test.go:980: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003706787s
addons_test.go:984: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable yakd --alsologtostderr -v=1
addons_test.go:984: (dbg) Done: out/minikube-linux-arm64 -p addons-561541 addons disable yakd --alsologtostderr -v=1: (5.873418049s)
--- PASS: TestAddons/parallel/Yakd (11.88s)

TestAddons/StoppedEnableDisable (12.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-561541
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-561541: (11.92218279s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-561541
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-561541
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-561541
--- PASS: TestAddons/StoppedEnableDisable (12.18s)

TestCertOptions (34.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-271702 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-271702 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.586263913s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-271702 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-271702 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-271702 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-271702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-271702
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-271702: (1.999411093s)
--- PASS: TestCertOptions (34.24s)
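
The checks above boil down to inspecting the generated apiserver certificate and the kubeconfig inside the guest. A minimal sketch reusing the log's own commands, with illustrative grep filters added to pull out the relevant fields:

    # the extra --apiserver-ips/--apiserver-names values should show up as SANs on the cert
    out/minikube-linux-arm64 -p cert-options-271702 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    # the admin kubeconfig inside the node should point at the non-default API server port 8555
    out/minikube-linux-arm64 ssh -p cert-options-271702 -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555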

TestCertExpiration (239.38s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-085581 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-085581 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.727435861s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-085581 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-085581 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.109750892s)
helpers_test.go:175: Cleaning up "cert-expiration-085581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-085581
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-085581: (2.540729252s)
--- PASS: TestCertExpiration (239.38s)
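
Most of the 239s above appears to be spent waiting for the deliberately short-lived 3-minute certificates to lapse before the second start regenerates them. A sketch of the same sequence, with a plain sleep standing in for the test's internal wait:

    # issue short-lived certs, let them expire, then restart with a one-year lifetime
    out/minikube-linux-arm64 start -p cert-expiration-085581 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
    sleep 180   # wait out the 3m certificate lifetime
    out/minikube-linux-arm64 start -p cert-expiration-085581 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 delete -p cert-expiration-085581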

TestForceSystemdFlag (34.78s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-922138 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1004 03:48:39.574968    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-922138 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.765327953s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-922138 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-922138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-922138
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-922138: (2.638639859s)
--- PASS: TestForceSystemdFlag (34.78s)

TestForceSystemdEnv (48.07s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-331379 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-331379 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.466579392s)
helpers_test.go:175: Cleaning up "force-systemd-env-331379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-331379
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-331379: (2.603128346s)
--- PASS: TestForceSystemdEnv (48.07s)

TestErrorSpam/setup (29.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-905425 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-905425 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-905425 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-905425 --driver=docker  --container-runtime=crio: (29.750862982s)
--- PASS: TestErrorSpam/setup (29.75s)

TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

TestErrorSpam/status (1.02s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 status
--- PASS: TestErrorSpam/status (1.02s)

TestErrorSpam/pause (1.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 pause
--- PASS: TestErrorSpam/pause (1.70s)

TestErrorSpam/unpause (1.76s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 stop: (1.293218742s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-905425 --log_dir /tmp/nospam-905425 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19546-2238/.minikube/files/etc/test/nested/copy/7560/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.61s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-154453 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-154453 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m16.609062988s)
--- PASS: TestFunctional/serial/StartWithProxy (76.61s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (23.86s)

=== RUN   TestFunctional/serial/SoftStart
I1004 03:09:22.631438    7560 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-154453 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-154453 --alsologtostderr -v=8: (23.859272197s)
functional_test.go:663: soft start took 23.86051068s for "functional-154453" cluster.
I1004 03:09:46.491020    7560 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (23.86s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-154453 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-154453 cache add registry.k8s.io/pause:3.1: (1.51953493s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-154453 cache add registry.k8s.io/pause:3.3: (1.561805653s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-154453 cache add registry.k8s.io/pause:latest: (1.288318736s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.37s)

TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-154453 /tmp/TestFunctionalserialCacheCmdcacheadd_local3605386872/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 cache add minikube-local-cache-test:functional-154453
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 cache delete minikube-local-cache-test:functional-154453
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-154453
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154453 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.598143ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-154453 cache reload: (1.025912024s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)
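
Condensed, the reload check above is four commands: remove the image from the node's runtime, confirm it is gone, repopulate it from minikube's on-disk cache, and confirm it is back. The commands are taken directly from the log:

    out/minikube-linux-arm64 -p functional-154453 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-154453 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image no longer present
    out/minikube-linux-arm64 -p functional-154453 cache reload
    out/minikube-linux-arm64 -p functional-154453 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again after the reload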

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 kubectl -- --context functional-154453 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-154453 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (31.57s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-154453 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-154453 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.569744121s)
functional_test.go:761: restart took 31.569860664s for "functional-154453" cluster.
I1004 03:10:26.691122    7560 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (31.57s)
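
The restart above shows how per-component flags are threaded through minikube: --extra-config takes component.key=value and maps it onto that component's command line. A minimal sketch of the same restart, plus an illustrative follow-up check (the static-pod name here assumes the usual kube-apiserver-<node> naming on a single-node profile):

    out/minikube-linux-arm64 start -p functional-154453 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # illustrative: the flag should now appear on the apiserver static pod's command line
    kubectl --context functional-154453 -n kube-system get pod kube-apiserver-functional-154453 -o yaml | grep enable-admission-plugins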

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-154453 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/serial/LogsCmd (1.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-154453 logs: (1.648529227s)
--- PASS: TestFunctional/serial/LogsCmd (1.65s)

TestFunctional/serial/LogsFileCmd (1.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 logs --file /tmp/TestFunctionalserialLogsFileCmd3679278267/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-154453 logs --file /tmp/TestFunctionalserialLogsFileCmd3679278267/001/logs.txt: (1.712024772s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

TestFunctional/serial/InvalidService (4.67s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-154453 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-154453
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-154453: exit status 115 (656.521301ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30451 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-154453 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.67s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154453 config get cpus: exit status 14 (91.396835ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154453 config get cpus: exit status 14 (57.270496ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (11.09s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-154453 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-154453 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 40758: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.09s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-154453 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-154453 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (204.871871ms)

-- stdout --
	* [functional-154453] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1004 03:11:07.366271   40503 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:11:07.366444   40503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:11:07.366456   40503 out.go:358] Setting ErrFile to fd 2...
	I1004 03:11:07.366462   40503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:11:07.366846   40503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 03:11:07.368268   40503 out.go:352] Setting JSON to false
	I1004 03:11:07.369374   40503 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3213,"bootTime":1728008255,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1004 03:11:07.369463   40503 start.go:139] virtualization:  
	I1004 03:11:07.374525   40503 out.go:177] * [functional-154453] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 03:11:07.377592   40503 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:11:07.377628   40503 notify.go:220] Checking for updates...
	I1004 03:11:07.386588   40503 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:11:07.390012   40503 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:11:07.393693   40503 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	I1004 03:11:07.396894   40503 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 03:11:07.399695   40503 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:11:07.402849   40503 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:11:07.403431   40503 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:11:07.444905   40503 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 03:11:07.445162   40503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:11:07.503648   40503 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-04 03:11:07.492925911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:11:07.503754   40503 docker.go:318] overlay module found
	I1004 03:11:07.507194   40503 out.go:177] * Using the docker driver based on existing profile
	I1004 03:11:07.510511   40503 start.go:297] selected driver: docker
	I1004 03:11:07.510530   40503 start.go:901] validating driver "docker" against &{Name:functional-154453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-154453 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:11:07.510647   40503 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:11:07.513773   40503 out.go:201] 
	W1004 03:11:07.516650   40503 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1004 03:11:07.519516   40503 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-154453 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
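
For context, --dry-run validates the requested configuration without mutating the existing cluster; asking for 250MB trips the memory floor named in the error output (1800MB usable minimum) and the command exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A sketch of reproducing just that check:

    out/minikube-linux-arm64 start -p functional-154453 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio
    echo $?   # 23 on this run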

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-154453 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-154453 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (182.736027ms)

-- stdout --
	* [functional-154453] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1004 03:11:07.186867   40458 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:11:07.187048   40458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:11:07.187057   40458 out.go:358] Setting ErrFile to fd 2...
	I1004 03:11:07.187061   40458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:11:07.187825   40458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 03:11:07.188225   40458 out.go:352] Setting JSON to false
	I1004 03:11:07.189096   40458 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3213,"bootTime":1728008255,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1004 03:11:07.189171   40458 start.go:139] virtualization:  
	I1004 03:11:07.194049   40458 out.go:177] * [functional-154453] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1004 03:11:07.196910   40458 notify.go:220] Checking for updates...
	I1004 03:11:07.199730   40458 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:11:07.202433   40458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:11:07.205033   40458 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:11:07.207697   40458 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	I1004 03:11:07.210307   40458 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 03:11:07.212779   40458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:11:07.215941   40458 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:11:07.216473   40458 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:11:07.238897   40458 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 03:11:07.239022   40458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:11:07.300476   40458 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-04 03:11:07.290590414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:11:07.300588   40458 docker.go:318] overlay module found
	I1004 03:11:07.303508   40458 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1004 03:11:07.306090   40458 start.go:297] selected driver: docker
	I1004 03:11:07.306107   40458 start.go:901] validating driver "docker" against &{Name:functional-154453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-154453 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:11:07.306222   40458 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:11:07.309254   40458 out.go:201] 
	W1004 03:11:07.311782   40458 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1004 03:11:07.314363   40458 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)

TestFunctional/parallel/ServiceCmdConnect (11.71s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-154453 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-154453 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-g4qwn" [2cee482c-d397-4e70-b0a0-1db11bf833a9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-g4qwn" [2cee482c-d397-4e70-b0a0-1db11bf833a9] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004477257s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32421
functional_test.go:1675: http://192.168.49.2:32421: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-g4qwn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32421
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.71s)
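
Note: the flow exercised above is a plain NodePort round trip: create a Deployment from registry.k8s.io/echoserver-arm:1.8, expose it on port 8080, ask minikube for the node URL, then fetch it. A minimal sketch of the same steps run by hand against this profile (the NodePort, 32421 here, changes per run):

kubectl --context functional-154453 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-154453 expose deployment hello-node-connect --type=NodePort --port=8080
# wait for the pod behind the service, then resolve its node URL through minikube
kubectl --context functional-154453 wait --for=condition=Ready pod -l app=hello-node-connect --timeout=120s
URL=$(out/minikube-linux-arm64 -p functional-154453 service hello-node-connect --url)
curl -s "$URL"    # echoserver answers with Hostname, server values and request headers, as shown above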

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (26.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [91914fef-582c-43b7-9524-012ec0aa76ac] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004053216s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-154453 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-154453 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-154453 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-154453 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d8e04b9b-2aa0-4a2e-9829-525f1931a408] Pending
helpers_test.go:344: "sp-pod" [d8e04b9b-2aa0-4a2e-9829-525f1931a408] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d8e04b9b-2aa0-4a2e-9829-525f1931a408] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003454476s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-154453 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-154453 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-154453 delete -f testdata/storage-provisioner/pod.yaml: (1.222153909s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-154453 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [be1123c0-fbb3-4d72-af2a-5130c0ba6df4] Pending
helpers_test.go:344: "sp-pod" [be1123c0-fbb3-4d72-af2a-5130c0ba6df4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003711078s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-154453 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.21s)
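
Note: the claim itself comes from testdata/storage-provisioner/pvc.yaml, which is not reproduced in this log. An illustrative claim of the same shape (the name myclaim matches the `get pvc myclaim` call above; the size and access mode are assumptions, not the repository file) binds through the default storageclass provided by the storage-provisioner addon:

# illustrative only -- not the repository's testdata/storage-provisioner/pvc.yaml
kubectl --context functional-154453 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-154453 get pvc myclaim -o=json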

                                                
                                    
TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

TestFunctional/parallel/CpCmd (2.25s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh -n functional-154453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 cp functional-154453:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1037707420/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh -n functional-154453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh -n functional-154453 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.25s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7560/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "sudo cat /etc/test/nested/copy/7560/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.08s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7560.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "sudo cat /etc/ssl/certs/7560.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7560.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "sudo cat /usr/share/ca-certificates/7560.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75602.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "sudo cat /etc/ssl/certs/75602.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75602.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "sudo cat /usr/share/ca-certificates/75602.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.08s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-154453 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154453 ssh "sudo systemctl is-active docker": exit status 1 (296.328338ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154453 ssh "sudo systemctl is-active containerd": exit status 1 (283.035758ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
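
Note: this run uses cri-o as the container runtime, so the check above asserts that the docker and containerd units inside the node are inactive; `systemctl is-active` exits non-zero for anything but an active unit, which is why the two probes come back with exit status 3. The same check by hand (crio being active is the expectation for this profile):

out/minikube-linux-arm64 -p functional-154453 ssh "sudo systemctl is-active crio"        # expected: active
out/minikube-linux-arm64 -p functional-154453 ssh "sudo systemctl is-active docker"      # expected: inactive
out/minikube-linux-arm64 -p functional-154453 ssh "sudo systemctl is-active containerd"  # expected: inactive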

                                                
                                    
TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
2024/10/04 03:11:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-154453 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-154453 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-154453 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 38231: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-154453 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-154453 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-154453 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4de3a7ed-9c4a-4916-be02-f44701ca05e0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4de3a7ed-9c4a-4916-be02-f44701ca05e0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004535245s
I1004 03:10:45.008012    7560 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.50s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-154453 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.23.255 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-154453 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
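
Note: the serial steps above exercise `minikube tunnel` end to end: start the tunnel, deploy a LoadBalancer service from testdata/testsvc.yaml, wait for Kubernetes to report an ingress IP, then hit that IP directly (10.110.23.255 in this run). A minimal sketch of the same flow:

# terminal 1: keep the tunnel in the foreground for as long as the route is needed
out/minikube-linux-arm64 -p functional-154453 tunnel --alsologtostderr

# terminal 2: deploy the LoadBalancer service and fetch it via the tunneled ingress IP
kubectl --context functional-154453 apply -f testdata/testsvc.yaml
IP=$(kubectl --context functional-154453 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://$IP/"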

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-154453 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-154453 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-tmqnz" [810f357d-c92a-4688-90b0-76f4959e25bf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-tmqnz" [810f357d-c92a-4688-90b0-76f4959e25bf] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.00420479s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "386.323043ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "62.900949ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 service list -o json
functional_test.go:1494: Took "588.164169ms" to run "out/minikube-linux-arm64 -p functional-154453 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "417.50936ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "90.673189ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)
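
Note: `profile list -o json` is the machine-readable variant of the listing timed above; the --light flag skips the per-profile status probe, which would explain the ~90ms vs ~420ms difference. The JSON groups profiles under valid/invalid arrays, so it can be post-processed; a small sketch (jq is an assumption, not part of the test tooling, and the field names follow the cluster config dumped earlier in this log):

out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'
out/minikube-linux-arm64 profile list -o json --light   # status fields are skipped in light mode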

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31414
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

TestFunctional/parallel/MountCmd/any-port (9.58s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-154453 /tmp/TestFunctionalparallelMountCmdany-port495593750/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728011464764698180" to /tmp/TestFunctionalparallelMountCmdany-port495593750/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728011464764698180" to /tmp/TestFunctionalparallelMountCmdany-port495593750/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728011464764698180" to /tmp/TestFunctionalparallelMountCmdany-port495593750/001/test-1728011464764698180
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154453 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (395.693413ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1004 03:11:05.160680    7560 retry.go:31] will retry after 681.769263ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  4 03:11 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  4 03:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  4 03:11 test-1728011464764698180
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh cat /mount-9p/test-1728011464764698180
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-154453 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [312add30-008e-4bf2-a212-1c33c0949593] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [312add30-008e-4bf2-a212-1c33c0949593] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [312add30-008e-4bf2-a212-1c33c0949593] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.0041275s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-154453 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-154453 /tmp/TestFunctionalparallelMountCmdany-port495593750/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.58s)
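
Note: the any-port test shares a host temp directory into the node at /mount-9p over 9p, verifies the files from inside the node, runs a pod against the mount, and then unmounts. The same loop by hand (the host path below is arbitrary; the test's generated path is shown in the log):

mkdir -p /tmp/minikube-mount-demo && echo "hello from the host" > /tmp/minikube-mount-demo/created-by-test
# keep the 9p server running in the background while the mount is in use
out/minikube-linux-arm64 mount -p functional-154453 /tmp/minikube-mount-demo:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
sleep 5   # give the mount a moment; the test itself retries findmnt, as seen above
out/minikube-linux-arm64 -p functional-154453 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-154453 ssh "cat /mount-9p/created-by-test"
kill "$MOUNT_PID"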

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31414
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

TestFunctional/parallel/MountCmd/specific-port (2.2s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-154453 /tmp/TestFunctionalparallelMountCmdspecific-port2605059060/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154453 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (526.233151ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1004 03:11:14.869043    7560 retry.go:31] will retry after 403.209304ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-154453 /tmp/TestFunctionalparallelMountCmdspecific-port2605059060/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154453 ssh "sudo umount -f /mount-9p": exit status 1 (322.856854ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-154453 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-154453 /tmp/TestFunctionalparallelMountCmdspecific-port2605059060/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.12s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-154453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1356220605/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-154453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1356220605/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-154453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1356220605/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154453 ssh "findmnt -T" /mount1: exit status 1 (853.433981ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1004 03:11:17.397257    7560 retry.go:31] will retry after 338.378626ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-154453 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-154453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1356220605/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-154453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1356220605/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-154453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1356220605/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.12s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.93s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.93s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-154453 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-154453
localhost/kicbase/echo-server:functional-154453
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-154453 image ls --format short --alsologtostderr:
I1004 03:11:26.129084   43356 out.go:345] Setting OutFile to fd 1 ...
I1004 03:11:26.129309   43356 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:11:26.129322   43356 out.go:358] Setting ErrFile to fd 2...
I1004 03:11:26.129328   43356 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:11:26.129583   43356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
I1004 03:11:26.130211   43356 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:11:26.130338   43356 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:11:26.130818   43356 cli_runner.go:164] Run: docker container inspect functional-154453 --format={{.State.Status}}
I1004 03:11:26.164043   43356 ssh_runner.go:195] Run: systemctl --version
I1004 03:11:26.164095   43356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154453
I1004 03:11:26.199739   43356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/functional-154453/id_rsa Username:docker}
I1004 03:11:26.297897   43356 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-154453 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| docker.io/library/nginx                 | alpine             | 577a23b5858b9 | 52.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/minikube-local-cache-test     | functional-154453  | ac31fc4704aaa | 3.33kB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/library/nginx                 | latest             | 048e090385966 | 201MB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| localhost/kicbase/echo-server           | functional-154453  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-154453 image ls --format table --alsologtostderr:
I1004 03:11:26.462755   43425 out.go:345] Setting OutFile to fd 1 ...
I1004 03:11:26.462930   43425 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:11:26.462943   43425 out.go:358] Setting ErrFile to fd 2...
I1004 03:11:26.462950   43425 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:11:26.463247   43425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
I1004 03:11:26.463921   43425 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:11:26.464088   43425 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:11:26.464657   43425 cli_runner.go:164] Run: docker container inspect functional-154453 --format={{.State.Status}}
I1004 03:11:26.484230   43425 ssh_runner.go:195] Run: systemctl --version
I1004 03:11:26.484287   43425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154453
I1004 03:11:26.510840   43425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/functional-154453/id_rsa Username:docker}
I1004 03:11:26.609895   43425 ssh_runner.go:195] Run: sudo crictl images --output json
E1004 03:11:26.695764    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:11:26.717112    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:11:26.758534    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:11:26.839909    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)
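
Note: `image ls` renders the same image inventory in the four formats exercised here (short, table, json, yaml); table is the human-readable view shown above, while json and yaml are easier to post-process. For example (jq is an assumption, not part of the test tooling):

out/minikube-linux-arm64 -p functional-154453 image ls --format table
out/minikube-linux-arm64 -p functional-154453 image ls --format json | jq -r '.[].repoTags[]'   # tags only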

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-154453 image ls --format json --alsologtostderr:
[{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fb
bb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-154453"],"size":"4788229"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"27e3830e1402783674d8b
594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"200984127"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-contro
ller-manager:v1.31.1"],"size":"86930758"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-s
craper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250","docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478"],"repoTags":["docker.io/library/nginx:alpine"],"size":"52254450"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/s
torage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ac31fc4704aaaca6c9d195339298e95ece0e16bfd0ab51ca961535afc77e9959","repoDigests":["localhost/minikube-local-cache-test@sha256:52fa5531a24ca209c2bd0ed43467ad07e0d64a1fb02935e63f0e5dddf4dc48d6"],"repoTags":["localhost/minikube-local-cache-test:functional-154453"],"size":"3330"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:
ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-154453 image ls --format json --alsologtostderr:
I1004 03:11:26.431781   43421 out.go:345] Setting OutFile to fd 1 ...
I1004 03:11:26.431971   43421 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:11:26.431999   43421 out.go:358] Setting ErrFile to fd 2...
I1004 03:11:26.432024   43421 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:11:26.432333   43421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
I1004 03:11:26.432996   43421 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:11:26.433154   43421 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:11:26.433701   43421 cli_runner.go:164] Run: docker container inspect functional-154453 --format={{.State.Status}}
I1004 03:11:26.458602   43421 ssh_runner.go:195] Run: systemctl --version
I1004 03:11:26.458655   43421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154453
I1004 03:11:26.488406   43421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/functional-154453/id_rsa Username:docker}
I1004 03:11:26.581559   43421 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-154453 image ls --format yaml --alsologtostderr:
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ac31fc4704aaaca6c9d195339298e95ece0e16bfd0ab51ca961535afc77e9959
repoDigests:
- localhost/minikube-local-cache-test@sha256:52fa5531a24ca209c2bd0ed43467ad07e0d64a1fb02935e63f0e5dddf4dc48d6
repoTags:
- localhost/minikube-local-cache-test:functional-154453
size: "3330"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
- docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478
repoTags:
- docker.io/library/nginx:alpine
size: "52254450"
- id: 048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "200984127"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-154453
size: "4788229"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-154453 image ls --format yaml --alsologtostderr:
I1004 03:11:26.151985   43357 out.go:345] Setting OutFile to fd 1 ...
I1004 03:11:26.152209   43357 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:11:26.152238   43357 out.go:358] Setting ErrFile to fd 2...
I1004 03:11:26.152260   43357 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:11:26.152600   43357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
I1004 03:11:26.153349   43357 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:11:26.153523   43357 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:11:26.154102   43357 cli_runner.go:164] Run: docker container inspect functional-154453 --format={{.State.Status}}
I1004 03:11:26.176113   43357 ssh_runner.go:195] Run: systemctl --version
I1004 03:11:26.176189   43357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154453
I1004 03:11:26.206948   43357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/functional-154453/id_rsa Username:docker}
I1004 03:11:26.301940   43357 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
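As the alsologtostderr lines above show, `image ls` shells into the node and reads the image store with crictl. A manual equivalent, condensed from the ssh_runner line above (same profile, same command):
# List the runtime's images directly, as the test's ssh_runner does above.
out/minikube-linux-arm64 -p functional-154453 ssh -- sudo crictl images --output json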

TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 ssh pgrep buildkitd
E1004 03:11:26.675959    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:11:26.684009    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154453 ssh pgrep buildkitd: exit status 1 (264.660099ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image build -t localhost/my-image:functional-154453 testdata/build --alsologtostderr
E1004 03:11:27.001552    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:11:27.323080    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:11:27.965008    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:11:29.246968    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-154453 image build -t localhost/my-image:functional-154453 testdata/build --alsologtostderr: (3.23050744s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-154453 image build -t localhost/my-image:functional-154453 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e752f65d438
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-154453
--> 4bd79d35a0d
Successfully tagged localhost/my-image:functional-154453
4bd79d35a0d3203426084e9df8ff8963cf6cd518bd88ec94f3c3ded817bdf85f
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-154453 image build -t localhost/my-image:functional-154453 testdata/build --alsologtostderr:
I1004 03:11:26.939699   43543 out.go:345] Setting OutFile to fd 1 ...
I1004 03:11:26.939877   43543 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:11:26.939887   43543 out.go:358] Setting ErrFile to fd 2...
I1004 03:11:26.939892   43543 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:11:26.940121   43543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
I1004 03:11:26.940746   43543 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:11:26.942115   43543 config.go:182] Loaded profile config "functional-154453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:11:26.942673   43543 cli_runner.go:164] Run: docker container inspect functional-154453 --format={{.State.Status}}
I1004 03:11:26.959264   43543 ssh_runner.go:195] Run: systemctl --version
I1004 03:11:26.959323   43543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154453
I1004 03:11:26.976653   43543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/functional-154453/id_rsa Username:docker}
I1004 03:11:27.073834   43543 build_images.go:161] Building image from path: /tmp/build.1778626014.tar
I1004 03:11:27.073901   43543 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1004 03:11:27.082978   43543 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1778626014.tar
I1004 03:11:27.086518   43543 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1778626014.tar: stat -c "%s %y" /var/lib/minikube/build/build.1778626014.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1778626014.tar': No such file or directory
I1004 03:11:27.086548   43543 ssh_runner.go:362] scp /tmp/build.1778626014.tar --> /var/lib/minikube/build/build.1778626014.tar (3072 bytes)
I1004 03:11:27.110907   43543 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1778626014
I1004 03:11:27.120119   43543 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1778626014 -xf /var/lib/minikube/build/build.1778626014.tar
I1004 03:11:27.129821   43543 crio.go:315] Building image: /var/lib/minikube/build/build.1778626014
I1004 03:11:27.129921   43543 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-154453 /var/lib/minikube/build/build.1778626014 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1004 03:11:30.092506   43543 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-154453 /var/lib/minikube/build/build.1778626014 --cgroup-manager=cgroupfs: (2.962549946s)
I1004 03:11:30.092585   43543 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1778626014
I1004 03:11:30.104059   43543 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1778626014.tar
I1004 03:11:30.114313   43543 build_images.go:217] Built localhost/my-image:functional-154453 from /tmp/build.1778626014.tar
I1004 03:11:30.114357   43543 build_images.go:133] succeeded building to: functional-154453
I1004 03:11:30.114364   43543 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)
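The three build STEPs logged above can be reproduced by hand. The Dockerfile below is a hypothetical reconstruction that matches those steps (the actual contents of testdata/build are not shown in this report), and the scratch directory name is made up:
# Hypothetical build context mirroring STEP 1/3..3/3 above; not the actual testdata/build.
mkdir -p /tmp/build-demo && cd /tmp/build-demo
printf 'hello\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-arm64 -p functional-154453 image build -t localhost/my-image:functional-154453 . --alsologtostderr
out/minikube-linux-arm64 -p functional-154453 image ls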

TestFunctional/parallel/ImageCommands/Setup (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-154453
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image load --daemon kicbase/echo-server:functional-154453 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-154453 image load --daemon kicbase/echo-server:functional-154453 --alsologtostderr: (1.438332849s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.71s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image load --daemon kicbase/echo-server:functional-154453 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-154453
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image load --daemon kicbase/echo-server:functional-154453 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image save kicbase/echo-server:functional-154453 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image rm kicbase/echo-server:functional-154453 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-154453
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-154453 image save --daemon kicbase/echo-server:functional-154453 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-154453
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
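Read together, the four image save/remove/load tests above amount to the following round-trip; this is a condensed sketch of the commands already shown in the Run lines above (same profile and tar path):
# Save the image from the cluster to a tarball, remove it, load it back, then export it to the host daemon.
out/minikube-linux-arm64 -p functional-154453 image save kicbase/echo-server:functional-154453 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
out/minikube-linux-arm64 -p functional-154453 image rm kicbase/echo-server:functional-154453 --alsologtostderr
out/minikube-linux-arm64 -p functional-154453 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
docker rmi kicbase/echo-server:functional-154453
out/minikube-linux-arm64 -p functional-154453 image save --daemon kicbase/echo-server:functional-154453 --alsologtostderr
docker image inspect localhost/kicbase/echo-server:functional-154453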

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-154453
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-154453
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-154453
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (172.15s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-481241 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1004 03:11:36.930079    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:11:47.172225    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:07.653632    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:48.615691    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:14:10.537638    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-481241 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m51.370872438s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (172.15s)

TestMultiControlPlane/serial/DeployApp (7.9s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-481241 -- rollout status deployment/busybox: (5.183860638s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-24zpz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-76fqw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-fb8qp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-24zpz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-76fqw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-fb8qp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-24zpz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-76fqw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-fb8qp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.90s)
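The per-pod DNS checks above reduce to a loop over the deployed busybox pods; this is a condensed sketch of the same kubectl invocations (pod discovery and the three lookups mirror ha_test.go:163/171/181/189):
# Repeat the three lookups for every busybox pod created by the test deployment.
for pod in $(out/minikube-linux-arm64 kubectl -p ha-481241 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
  for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
    out/minikube-linux-arm64 kubectl -p ha-481241 -- exec "$pod" -- nslookup "$name"
  done
done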

TestMultiControlPlane/serial/PingHostFromPods (1.56s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-24zpz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-24zpz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-76fqw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-76fqw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-fb8qp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481241 -- exec busybox-7dff88458-fb8qp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.56s)

TestMultiControlPlane/serial/AddWorkerNode (63.42s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-481241 -v=7 --alsologtostderr
E1004 03:15:36.507756    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:15:36.514217    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:15:36.525666    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:15:36.547061    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:15:36.589272    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:15:36.670716    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:15:36.832410    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:15:37.153725    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-481241 -v=7 --alsologtostderr: (1m2.452792422s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr
E1004 03:15:37.795739    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (63.42s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-481241 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1004 03:15:39.077056    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.000613653s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.00s)

TestMultiControlPlane/serial/CopyFile (18s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp testdata/cp-test.txt ha-481241:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1073171230/001/cp-test_ha-481241.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241:/home/docker/cp-test.txt ha-481241-m02:/home/docker/cp-test_ha-481241_ha-481241-m02.txt
E1004 03:15:41.639056    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m02 "sudo cat /home/docker/cp-test_ha-481241_ha-481241-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241:/home/docker/cp-test.txt ha-481241-m03:/home/docker/cp-test_ha-481241_ha-481241-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m03 "sudo cat /home/docker/cp-test_ha-481241_ha-481241-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241:/home/docker/cp-test.txt ha-481241-m04:/home/docker/cp-test_ha-481241_ha-481241-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m04 "sudo cat /home/docker/cp-test_ha-481241_ha-481241-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp testdata/cp-test.txt ha-481241-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1073171230/001/cp-test_ha-481241-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241-m02:/home/docker/cp-test.txt ha-481241:/home/docker/cp-test_ha-481241-m02_ha-481241.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241 "sudo cat /home/docker/cp-test_ha-481241-m02_ha-481241.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241-m02:/home/docker/cp-test.txt ha-481241-m03:/home/docker/cp-test_ha-481241-m02_ha-481241-m03.txt
E1004 03:15:46.760729    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m03 "sudo cat /home/docker/cp-test_ha-481241-m02_ha-481241-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241-m02:/home/docker/cp-test.txt ha-481241-m04:/home/docker/cp-test_ha-481241-m02_ha-481241-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m04 "sudo cat /home/docker/cp-test_ha-481241-m02_ha-481241-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp testdata/cp-test.txt ha-481241-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1073171230/001/cp-test_ha-481241-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241-m03:/home/docker/cp-test.txt ha-481241:/home/docker/cp-test_ha-481241-m03_ha-481241.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241 "sudo cat /home/docker/cp-test_ha-481241-m03_ha-481241.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241-m03:/home/docker/cp-test.txt ha-481241-m02:/home/docker/cp-test_ha-481241-m03_ha-481241-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m02 "sudo cat /home/docker/cp-test_ha-481241-m03_ha-481241-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241-m03:/home/docker/cp-test.txt ha-481241-m04:/home/docker/cp-test_ha-481241-m03_ha-481241-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m04 "sudo cat /home/docker/cp-test_ha-481241-m03_ha-481241-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp testdata/cp-test.txt ha-481241-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1073171230/001/cp-test_ha-481241-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241-m04:/home/docker/cp-test.txt ha-481241:/home/docker/cp-test_ha-481241-m04_ha-481241.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241 "sudo cat /home/docker/cp-test_ha-481241-m04_ha-481241.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241-m04:/home/docker/cp-test.txt ha-481241-m02:/home/docker/cp-test_ha-481241-m04_ha-481241-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m02 "sudo cat /home/docker/cp-test_ha-481241-m04_ha-481241-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 cp ha-481241-m04:/home/docker/cp-test.txt ha-481241-m03:/home/docker/cp-test_ha-481241-m04_ha-481241-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 ssh -n ha-481241-m03 "sudo cat /home/docker/cp-test_ha-481241-m04_ha-481241-m03.txt"
E1004 03:15:57.002929    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.00s)

TestMultiControlPlane/serial/StopSecondaryNode (12.69s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-481241 node stop m02 -v=7 --alsologtostderr: (11.966434311s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr: exit status 7 (723.644519ms)

-- stdout --
	ha-481241
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-481241-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-481241-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-481241-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1004 03:16:09.298056   59212 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:16:09.298188   59212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:16:09.298199   59212 out.go:358] Setting ErrFile to fd 2...
	I1004 03:16:09.298205   59212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:16:09.298449   59212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 03:16:09.298636   59212 out.go:352] Setting JSON to false
	I1004 03:16:09.298670   59212 mustload.go:65] Loading cluster: ha-481241
	I1004 03:16:09.299095   59212 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:16:09.299116   59212 status.go:174] checking status of ha-481241 ...
	I1004 03:16:09.299684   59212 cli_runner.go:164] Run: docker container inspect ha-481241 --format={{.State.Status}}
	I1004 03:16:09.299955   59212 notify.go:220] Checking for updates...
	I1004 03:16:09.318558   59212 status.go:371] ha-481241 host status = "Running" (err=<nil>)
	I1004 03:16:09.318602   59212 host.go:66] Checking if "ha-481241" exists ...
	I1004 03:16:09.318967   59212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481241
	I1004 03:16:09.349266   59212 host.go:66] Checking if "ha-481241" exists ...
	I1004 03:16:09.349593   59212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:16:09.349654   59212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241
	I1004 03:16:09.367581   59212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241/id_rsa Username:docker}
	I1004 03:16:09.462658   59212 ssh_runner.go:195] Run: systemctl --version
	I1004 03:16:09.467236   59212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:16:09.480317   59212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:16:09.530878   59212 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-04 03:16:09.519449535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:16:09.531565   59212 kubeconfig.go:125] found "ha-481241" server: "https://192.168.49.254:8443"
	I1004 03:16:09.531600   59212 api_server.go:166] Checking apiserver status ...
	I1004 03:16:09.531662   59212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:16:09.543348   59212 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	I1004 03:16:09.553695   59212 api_server.go:182] apiserver freezer: "11:freezer:/docker/462f298eff4281286995e3854193dfffc664e65c1babc48f0a6c0308f78e6495/crio/crio-d03265150ae8a6ddf0958c05d952f1b7962515e9174393d3dd1fb8e265ece9bc"
	I1004 03:16:09.553771   59212 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/462f298eff4281286995e3854193dfffc664e65c1babc48f0a6c0308f78e6495/crio/crio-d03265150ae8a6ddf0958c05d952f1b7962515e9174393d3dd1fb8e265ece9bc/freezer.state
	I1004 03:16:09.565052   59212 api_server.go:204] freezer state: "THAWED"
	I1004 03:16:09.565080   59212 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1004 03:16:09.575749   59212 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1004 03:16:09.575780   59212 status.go:463] ha-481241 apiserver status = Running (err=<nil>)
	I1004 03:16:09.575791   59212 status.go:176] ha-481241 status: &{Name:ha-481241 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:16:09.575808   59212 status.go:174] checking status of ha-481241-m02 ...
	I1004 03:16:09.576135   59212 cli_runner.go:164] Run: docker container inspect ha-481241-m02 --format={{.State.Status}}
	I1004 03:16:09.594125   59212 status.go:371] ha-481241-m02 host status = "Stopped" (err=<nil>)
	I1004 03:16:09.594148   59212 status.go:384] host is not running, skipping remaining checks
	I1004 03:16:09.594156   59212 status.go:176] ha-481241-m02 status: &{Name:ha-481241-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:16:09.594178   59212 status.go:174] checking status of ha-481241-m03 ...
	I1004 03:16:09.594521   59212 cli_runner.go:164] Run: docker container inspect ha-481241-m03 --format={{.State.Status}}
	I1004 03:16:09.611455   59212 status.go:371] ha-481241-m03 host status = "Running" (err=<nil>)
	I1004 03:16:09.611480   59212 host.go:66] Checking if "ha-481241-m03" exists ...
	I1004 03:16:09.611891   59212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481241-m03
	I1004 03:16:09.631829   59212 host.go:66] Checking if "ha-481241-m03" exists ...
	I1004 03:16:09.632199   59212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:16:09.632280   59212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m03
	I1004 03:16:09.649393   59212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m03/id_rsa Username:docker}
	I1004 03:16:09.742709   59212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:16:09.755921   59212 kubeconfig.go:125] found "ha-481241" server: "https://192.168.49.254:8443"
	I1004 03:16:09.755956   59212 api_server.go:166] Checking apiserver status ...
	I1004 03:16:09.756006   59212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:16:09.767775   59212 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1326/cgroup
	I1004 03:16:09.778412   59212 api_server.go:182] apiserver freezer: "11:freezer:/docker/144d9cadb7cd15acf9294e2b09c7b62e0381e57822c4201cfafde60b736e2cc2/crio/crio-f2d5d7b0d75df6e5f0693ae66e8538f70adb01318676edba7c76210daeefae78"
	I1004 03:16:09.778510   59212 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/144d9cadb7cd15acf9294e2b09c7b62e0381e57822c4201cfafde60b736e2cc2/crio/crio-f2d5d7b0d75df6e5f0693ae66e8538f70adb01318676edba7c76210daeefae78/freezer.state
	I1004 03:16:09.787719   59212 api_server.go:204] freezer state: "THAWED"
	I1004 03:16:09.787747   59212 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1004 03:16:09.795409   59212 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1004 03:16:09.795435   59212 status.go:463] ha-481241-m03 apiserver status = Running (err=<nil>)
	I1004 03:16:09.795444   59212 status.go:176] ha-481241-m03 status: &{Name:ha-481241-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:16:09.795460   59212 status.go:174] checking status of ha-481241-m04 ...
	I1004 03:16:09.795809   59212 cli_runner.go:164] Run: docker container inspect ha-481241-m04 --format={{.State.Status}}
	I1004 03:16:09.815007   59212 status.go:371] ha-481241-m04 host status = "Running" (err=<nil>)
	I1004 03:16:09.815033   59212 host.go:66] Checking if "ha-481241-m04" exists ...
	I1004 03:16:09.815331   59212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481241-m04
	I1004 03:16:09.832855   59212 host.go:66] Checking if "ha-481241-m04" exists ...
	I1004 03:16:09.833172   59212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:16:09.833256   59212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481241-m04
	I1004 03:16:09.850910   59212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/ha-481241-m04/id_rsa Username:docker}
	I1004 03:16:09.946340   59212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:16:09.959068   59212 status.go:176] ha-481241-m04 status: &{Name:ha-481241-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.69s)
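The stderr above shows how `status` decides an apiserver is healthy: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then hit /healthz on the load-balanced endpoint. A rough manual replay on a control-plane node would look like this (the curl call is an assumed stand-in for the in-process healthz check in api_server.go):
# Run inside `minikube ssh -p ha-481241`; mirrors the status.go/api_server.go steps logged above.
PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup    # prints the freezer cgroup path for the apiserver container
# cat the freezer.state file under that cgroup path; "THAWED" means the container is not paused
curl -k https://192.168.49.254:8443/healthz        # assumed equivalent of the healthz probe in the log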

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

TestMultiControlPlane/serial/RestartSecondaryNode (33.25s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 node start m02 -v=7 --alsologtostderr
E1004 03:16:17.485348    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:16:26.675790    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-481241 node start m02 -v=7 --alsologtostderr: (31.838253284s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr: (1.266807363s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.25s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.307399505s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (214.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-481241 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-481241 -v=7 --alsologtostderr
E1004 03:16:54.383753    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:16:58.447606    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-481241 -v=7 --alsologtostderr: (37.199281166s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-481241 --wait=true -v=7 --alsologtostderr
E1004 03:18:20.369863    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-481241 --wait=true -v=7 --alsologtostderr: (2m57.37284948s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-481241
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (214.73s)
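For reference, the stop-and-restart cycle exercised above corresponds to the following commands (profile name taken from this run); this is just the manual equivalent of what the test drives:
	out/minikube-linux-arm64 stop -p ha-481241 -v=7 --alsologtostderr
	out/minikube-linux-arm64 start -p ha-481241 --wait=true -v=7 --alsologtostderr
	out/minikube-linux-arm64 node list -p ha-481241   # node list should match the pre-stop output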

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-481241 node delete m03 -v=7 --alsologtostderr: (12.006033169s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.89s)
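The node removal and the readiness check above can be reproduced by hand; the go-template is copied from the test and prints the Ready condition of each remaining node:
	out/minikube-linux-arm64 -p ha-481241 node delete m03 -v=7 --alsologtostderr
	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"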

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 stop -v=7 --alsologtostderr
E1004 03:20:36.507620    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:21:04.211256    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-481241 stop -v=7 --alsologtostderr: (35.668394145s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr: exit status 7 (113.003498ms)

                                                
                                                
-- stdout --
	ha-481241
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-481241-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-481241-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 03:21:09.343716   73746 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:21:09.343834   73746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:21:09.343843   73746 out.go:358] Setting ErrFile to fd 2...
	I1004 03:21:09.343849   73746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:21:09.344118   73746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 03:21:09.344326   73746 out.go:352] Setting JSON to false
	I1004 03:21:09.344377   73746 mustload.go:65] Loading cluster: ha-481241
	I1004 03:21:09.344433   73746 notify.go:220] Checking for updates...
	I1004 03:21:09.344840   73746 config.go:182] Loaded profile config "ha-481241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:21:09.344854   73746 status.go:174] checking status of ha-481241 ...
	I1004 03:21:09.345732   73746 cli_runner.go:164] Run: docker container inspect ha-481241 --format={{.State.Status}}
	I1004 03:21:09.363197   73746 status.go:371] ha-481241 host status = "Stopped" (err=<nil>)
	I1004 03:21:09.363224   73746 status.go:384] host is not running, skipping remaining checks
	I1004 03:21:09.363231   73746 status.go:176] ha-481241 status: &{Name:ha-481241 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:21:09.363265   73746 status.go:174] checking status of ha-481241-m02 ...
	I1004 03:21:09.363580   73746 cli_runner.go:164] Run: docker container inspect ha-481241-m02 --format={{.State.Status}}
	I1004 03:21:09.379600   73746 status.go:371] ha-481241-m02 host status = "Stopped" (err=<nil>)
	I1004 03:21:09.379623   73746 status.go:384] host is not running, skipping remaining checks
	I1004 03:21:09.379630   73746 status.go:176] ha-481241-m02 status: &{Name:ha-481241-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:21:09.379652   73746 status.go:174] checking status of ha-481241-m04 ...
	I1004 03:21:09.379982   73746 cli_runner.go:164] Run: docker container inspect ha-481241-m04 --format={{.State.Status}}
	I1004 03:21:09.407467   73746 status.go:371] ha-481241-m04 host status = "Stopped" (err=<nil>)
	I1004 03:21:09.407490   73746 status.go:384] host is not running, skipping remaining checks
	I1004 03:21:09.407498   73746 status.go:176] ha-481241-m04 status: &{Name:ha-481241-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.78s)
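Worth noting for scripting: `minikube status` intentionally exits non-zero once hosts are stopped (exit status 7 in the run above), so automation should branch on the exit code rather than parsing the text. A minimal sketch:
	out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr
	rc=$?
	# rc is 0 only when host, kubelet and apiserver are all Running; here it is 7 because every node is stopped
	[ "$rc" -eq 0 ] || echo "cluster not fully running (status exit code $rc)"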

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (71.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-481241 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-481241 --control-plane -v=7 --alsologtostderr: (1m10.380055252s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.34s)
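Adding a third control-plane member goes through the same `node add` path as a worker, only with the --control-plane flag used above:
	out/minikube-linux-arm64 node add -p ha-481241 --control-plane -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-481241 status -v=7 --alsologtostderr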

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.00s)

                                                
                                    
TestJSONOutput/start/Command (48.51s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-345019 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-345019 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (48.499533649s)
--- PASS: TestJSONOutput/start/Command (48.51s)
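With --output=json, minikube prints one CloudEvent per line (the TestErrorJSONOutput section below shows the exact shape). A sketch of consuming that stream; jq is not part of the test and is assumed here purely for illustration:
	out/minikube-linux-arm64 start -p json-output-345019 --output=json --user=testUser \
	  --memory=2200 --wait=true --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'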

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-345019 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-345019 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.85s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-345019 --output=json --user=testUser
E1004 03:25:36.507267    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-345019 --output=json --user=testUser: (5.854045056s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-487213 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-487213 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.983019ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d1c05927-34d8-49f3-a7b0-cb7ae5cf618d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-487213] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"283a937c-7a80-4070-8aac-89102b3f62bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19546"}}
	{"specversion":"1.0","id":"6fc6d39a-68df-41c1-89b0-509bfd8e8e98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3ff7bf73-136e-4fdb-8158-2ff08c80aaa7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig"}}
	{"specversion":"1.0","id":"03f7f979-4840-41e7-8e23-f016d1d12347","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube"}}
	{"specversion":"1.0","id":"cdc65118-1a70-416d-8b70-a27c4eb5d879","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4ebcb86a-3722-4afc-978b-23cdaaa5d79b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"38cccf53-fecf-4baa-8b51-570d0e76d95d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-487213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-487213
--- PASS: TestErrorJSONOutput (0.23s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (39.49s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-594322 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-594322 --network=: (37.430569046s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-594322" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-594322
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-594322: (2.040690913s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.49s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.08s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-611075 --network=bridge
E1004 03:26:26.675491    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-611075 --network=bridge: (34.058730876s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-611075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-611075
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-611075: (1.990700083s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.08s)

                                                
                                    
TestKicExistingNetwork (31.04s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1004 03:26:59.048373    7560 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1004 03:26:59.063690    7560 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1004 03:26:59.064428    7560 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1004 03:26:59.064465    7560 cli_runner.go:164] Run: docker network inspect existing-network
W1004 03:26:59.079858    7560 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1004 03:26:59.079889    7560 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1004 03:26:59.079906    7560 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1004 03:26:59.080013    7560 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1004 03:26:59.097455    7560 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c36a525dd0a7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8c:4d:81:d6} reservation:<nil>}
I1004 03:26:59.097844    7560 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400142ff40}
I1004 03:26:59.097874    7560 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1004 03:26:59.097937    7560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1004 03:26:59.176622    7560 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-647011 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-647011 --network=existing-network: (28.971630738s)
helpers_test.go:175: Cleaning up "existing-network-647011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-647011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-647011: (1.905301115s)
I1004 03:27:30.069370    7560 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.04s)
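The "existing network" scenario boils down to creating the network with plain docker first and then pointing minikube at it; the create command below is lifted from the log, and the minikube.sigs.k8s.io labels are what the later label-filtered cleanup relies on:
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
	out/minikube-linux-arm64 start -p existing-network-647011 --network=existing-network
	docker network ls --format {{.Name}}   # the pre-created network should appear alongside the profile networks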

                                                
                                    
TestKicCustomSubnet (31.72s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-585466 --subnet=192.168.60.0/24
E1004 03:27:49.745096    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-585466 --subnet=192.168.60.0/24: (29.588754328s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-585466 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-585466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-585466
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-585466: (2.110404925s)
--- PASS: TestKicCustomSubnet (31.72s)
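Pinning the cluster network to a specific range follows the same pattern: pass --subnet at start, then read the subnet back from docker exactly as the test does:
	out/minikube-linux-arm64 start -p custom-subnet-585466 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-585466 --format "{{(index .IPAM.Config 0).Subnet}}"   # expected: 192.168.60.0/24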

                                                
                                    
TestKicStaticIP (32.69s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-403122 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-403122 --static-ip=192.168.200.200: (30.539455691s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-403122 ip
helpers_test.go:175: Cleaning up "static-ip-403122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-403122
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-403122: (2.011837207s)
--- PASS: TestKicStaticIP (32.69s)
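--static-ip does the same for the node address itself; `minikube ip` is how the test reads it back:
	out/minikube-linux-arm64 start -p static-ip-403122 --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p static-ip-403122 ip   # expected: 192.168.200.200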

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (68.18s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-135993 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-135993 --driver=docker  --container-runtime=crio: (29.639166521s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-138627 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-138627 --driver=docker  --container-runtime=crio: (33.36509968s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-135993
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-138627
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-138627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-138627
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-138627: (1.948121743s)
helpers_test.go:175: Cleaning up "first-135993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-135993
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-135993: (1.926222342s)
--- PASS: TestMinikubeProfile (68.18s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-867098 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-867098 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.347161482s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.35s)
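The mount-only profile is started without Kubernetes; the flags below repeat the exact invocation from this run, and the later VerifyMount* steps simply list the mounted host path over ssh:
	out/minikube-linux-arm64 start -p mount-start-1-867098 --memory=2048 --mount \
	  --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
	  --no-kubernetes --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p mount-start-1-867098 ssh -- ls /minikube-host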

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-867098 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.7s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-869018 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-869018 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.702842384s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.70s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-869018 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-867098 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-867098 --alsologtostderr -v=5: (1.647658459s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-869018 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-869018
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-869018: (1.20653654s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.79s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-869018
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-869018: (6.791155908s)
--- PASS: TestMountStart/serial/RestartStopped (7.79s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-869018 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (106.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-304631 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1004 03:30:36.507347    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:31:26.675052    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-304631 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m46.167419311s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.65s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- rollout status deployment/busybox
E1004 03:31:59.573370    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-304631 -- rollout status deployment/busybox: (4.555787069s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- exec busybox-7dff88458-jlqhz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- exec busybox-7dff88458-xttd8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- exec busybox-7dff88458-jlqhz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- exec busybox-7dff88458-xttd8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- exec busybox-7dff88458-jlqhz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- exec busybox-7dff88458-xttd8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.32s)
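The DNS checks above reduce to waiting for the rollout and resolving in-cluster names from each replica; the pod name is taken from this run and will differ on another cluster:
	out/minikube-linux-arm64 kubectl -p multinode-304631 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
	out/minikube-linux-arm64 kubectl -p multinode-304631 -- rollout status deployment/busybox
	out/minikube-linux-arm64 kubectl -p multinode-304631 -- exec busybox-7dff88458-jlqhz -- nslookup kubernetes.default.svc.cluster.local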

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- exec busybox-7dff88458-jlqhz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- exec busybox-7dff88458-jlqhz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- exec busybox-7dff88458-xttd8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-304631 -- exec busybox-7dff88458-xttd8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                    
TestMultiNode/serial/AddNode (28.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-304631 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-304631 -v 3 --alsologtostderr: (28.0855483s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.72s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-304631 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 cp testdata/cp-test.txt multinode-304631:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 cp multinode-304631:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2039539318/001/cp-test_multinode-304631.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 cp multinode-304631:/home/docker/cp-test.txt multinode-304631-m02:/home/docker/cp-test_multinode-304631_multinode-304631-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m02 "sudo cat /home/docker/cp-test_multinode-304631_multinode-304631-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 cp multinode-304631:/home/docker/cp-test.txt multinode-304631-m03:/home/docker/cp-test_multinode-304631_multinode-304631-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m03 "sudo cat /home/docker/cp-test_multinode-304631_multinode-304631-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 cp testdata/cp-test.txt multinode-304631-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 cp multinode-304631-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2039539318/001/cp-test_multinode-304631-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 cp multinode-304631-m02:/home/docker/cp-test.txt multinode-304631:/home/docker/cp-test_multinode-304631-m02_multinode-304631.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631 "sudo cat /home/docker/cp-test_multinode-304631-m02_multinode-304631.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 cp multinode-304631-m02:/home/docker/cp-test.txt multinode-304631-m03:/home/docker/cp-test_multinode-304631-m02_multinode-304631-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m03 "sudo cat /home/docker/cp-test_multinode-304631-m02_multinode-304631-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 cp testdata/cp-test.txt multinode-304631-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 cp multinode-304631-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2039539318/001/cp-test_multinode-304631-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 cp multinode-304631-m03:/home/docker/cp-test.txt multinode-304631:/home/docker/cp-test_multinode-304631-m03_multinode-304631.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631 "sudo cat /home/docker/cp-test_multinode-304631-m03_multinode-304631.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 cp multinode-304631-m03:/home/docker/cp-test.txt multinode-304631-m02:/home/docker/cp-test_multinode-304631-m03_multinode-304631-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m02 "sudo cat /home/docker/cp-test_multinode-304631-m03_multinode-304631-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.66s)
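Each CopyFile step pairs a `minikube cp` with an ssh `cat` that proves the file landed; one representative round-trip between two nodes, using the paths from the test, looks like:
	out/minikube-linux-arm64 -p multinode-304631 cp testdata/cp-test.txt multinode-304631:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-304631 cp multinode-304631:/home/docker/cp-test.txt multinode-304631-m02:/home/docker/cp-test_multinode-304631_multinode-304631-m02.txt
	out/minikube-linux-arm64 -p multinode-304631 ssh -n multinode-304631-m02 "sudo cat /home/docker/cp-test_multinode-304631_multinode-304631-m02.txt"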

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-304631 node stop m03: (1.211935888s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-304631 status: exit status 7 (516.285604ms)

                                                
                                                
-- stdout --
	multinode-304631
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-304631-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-304631-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-304631 status --alsologtostderr: exit status 7 (512.702562ms)

                                                
                                                
-- stdout --
	multinode-304631
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-304631-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-304631-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 03:32:47.079589  128107 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:32:47.079705  128107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:32:47.079710  128107 out.go:358] Setting ErrFile to fd 2...
	I1004 03:32:47.079715  128107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:32:47.079983  128107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 03:32:47.080186  128107 out.go:352] Setting JSON to false
	I1004 03:32:47.080213  128107 mustload.go:65] Loading cluster: multinode-304631
	I1004 03:32:47.080246  128107 notify.go:220] Checking for updates...
	I1004 03:32:47.080630  128107 config.go:182] Loaded profile config "multinode-304631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:32:47.080642  128107 status.go:174] checking status of multinode-304631 ...
	I1004 03:32:47.081516  128107 cli_runner.go:164] Run: docker container inspect multinode-304631 --format={{.State.Status}}
	I1004 03:32:47.099950  128107 status.go:371] multinode-304631 host status = "Running" (err=<nil>)
	I1004 03:32:47.099980  128107 host.go:66] Checking if "multinode-304631" exists ...
	I1004 03:32:47.100397  128107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-304631
	I1004 03:32:47.125355  128107 host.go:66] Checking if "multinode-304631" exists ...
	I1004 03:32:47.125668  128107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:32:47.125727  128107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-304631
	I1004 03:32:47.149559  128107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/multinode-304631/id_rsa Username:docker}
	I1004 03:32:47.242568  128107 ssh_runner.go:195] Run: systemctl --version
	I1004 03:32:47.246727  128107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:32:47.258253  128107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:32:47.317467  128107 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-04 03:32:47.307401886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:32:47.318050  128107 kubeconfig.go:125] found "multinode-304631" server: "https://192.168.67.2:8443"
	I1004 03:32:47.318087  128107 api_server.go:166] Checking apiserver status ...
	I1004 03:32:47.318133  128107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:32:47.328385  128107 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1408/cgroup
	I1004 03:32:47.337465  128107 api_server.go:182] apiserver freezer: "11:freezer:/docker/467965eee4fcac7c11b832d83aab9ad490a0a69ea59f8c50646e7ce33711752a/crio/crio-9de948f5fd6e04b8eaff6d7f51f59b3e259cc02d31f61bb0ecff4a5c81071dca"
	I1004 03:32:47.337542  128107 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/467965eee4fcac7c11b832d83aab9ad490a0a69ea59f8c50646e7ce33711752a/crio/crio-9de948f5fd6e04b8eaff6d7f51f59b3e259cc02d31f61bb0ecff4a5c81071dca/freezer.state
	I1004 03:32:47.347813  128107 api_server.go:204] freezer state: "THAWED"
	I1004 03:32:47.347841  128107 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1004 03:32:47.355562  128107 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1004 03:32:47.355642  128107 status.go:463] multinode-304631 apiserver status = Running (err=<nil>)
	I1004 03:32:47.355667  128107 status.go:176] multinode-304631 status: &{Name:multinode-304631 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:32:47.355713  128107 status.go:174] checking status of multinode-304631-m02 ...
	I1004 03:32:47.356071  128107 cli_runner.go:164] Run: docker container inspect multinode-304631-m02 --format={{.State.Status}}
	I1004 03:32:47.372083  128107 status.go:371] multinode-304631-m02 host status = "Running" (err=<nil>)
	I1004 03:32:47.372108  128107 host.go:66] Checking if "multinode-304631-m02" exists ...
	I1004 03:32:47.372526  128107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-304631-m02
	I1004 03:32:47.388722  128107 host.go:66] Checking if "multinode-304631-m02" exists ...
	I1004 03:32:47.389094  128107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:32:47.389167  128107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-304631-m02
	I1004 03:32:47.406531  128107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19546-2238/.minikube/machines/multinode-304631-m02/id_rsa Username:docker}
	I1004 03:32:47.503937  128107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:32:47.517142  128107 status.go:176] multinode-304631-m02 status: &{Name:multinode-304631-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:32:47.517182  128107 status.go:174] checking status of multinode-304631-m03 ...
	I1004 03:32:47.517565  128107 cli_runner.go:164] Run: docker container inspect multinode-304631-m03 --format={{.State.Status}}
	I1004 03:32:47.533470  128107 status.go:371] multinode-304631-m03 host status = "Stopped" (err=<nil>)
	I1004 03:32:47.533492  128107 status.go:384] host is not running, skipping remaining checks
	I1004 03:32:47.533500  128107 status.go:176] multinode-304631-m03 status: &{Name:multinode-304631-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
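For anyone reproducing the status check in the log above by hand, it boils down to the following sequence (a sketch only; the node name, container IDs, and apiserver endpoint are specific to this run, and the final probe assumes the default anonymous access to /healthz):

	# host state comes from the Docker container backing the node
	docker container inspect multinode-304631 --format={{.State.Status}}
	# inside the node (e.g. via `minikube ssh -p multinode-304631`), kubelet is checked as a systemd service
	sudo systemctl is-active --quiet service kubelet
	# the apiserver is located by PID, then its cgroup freezer state is read (THAWED = running)
	sudo pgrep -xnf "kube-apiserver.*minikube.*"
	sudo cat /sys/fs/cgroup/freezer/docker/<node-container-id>/crio/crio-<apiserver-container-id>/freezer.state
	# finally the readiness endpoint is probed
	curl -k https://192.168.67.2:8443/healthz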

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (10.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-304631 node start m03 -v=7 --alsologtostderr: (9.564888591s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.29s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (99.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-304631
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-304631
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-304631: (24.843502919s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-304631 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-304631 --wait=true -v=8 --alsologtostderr: (1m14.599426355s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-304631
--- PASS: TestMultiNode/serial/RestartKeepsNodes (99.56s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-304631 node delete m03: (4.770029208s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.43s)
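The go-template in the final kubectl call above just walks every node's status.conditions and prints the status of the Ready condition, so after deleting m03 a healthy cluster yields one line per remaining node. Run standalone it looks like this (a sketch; the output comment is illustrative):

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# expected: one " True" per node, here multinode-304631 and multinode-304631-m02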

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-304631 stop: (23.697311885s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-304631 status: exit status 7 (95.859539ms)

                                                
                                                
-- stdout --
	multinode-304631
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-304631-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-304631 status --alsologtostderr: exit status 7 (86.554332ms)

                                                
                                                
-- stdout --
	multinode-304631
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-304631-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 03:35:06.646607  135860 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:35:06.646772  135860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:35:06.646783  135860 out.go:358] Setting ErrFile to fd 2...
	I1004 03:35:06.646788  135860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:35:06.647063  135860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 03:35:06.647247  135860 out.go:352] Setting JSON to false
	I1004 03:35:06.647273  135860 mustload.go:65] Loading cluster: multinode-304631
	I1004 03:35:06.647368  135860 notify.go:220] Checking for updates...
	I1004 03:35:06.647685  135860 config.go:182] Loaded profile config "multinode-304631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:35:06.647702  135860 status.go:174] checking status of multinode-304631 ...
	I1004 03:35:06.648297  135860 cli_runner.go:164] Run: docker container inspect multinode-304631 --format={{.State.Status}}
	I1004 03:35:06.666478  135860 status.go:371] multinode-304631 host status = "Stopped" (err=<nil>)
	I1004 03:35:06.666504  135860 status.go:384] host is not running, skipping remaining checks
	I1004 03:35:06.666512  135860 status.go:176] multinode-304631 status: &{Name:multinode-304631 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:35:06.666557  135860 status.go:174] checking status of multinode-304631-m02 ...
	I1004 03:35:06.666904  135860 cli_runner.go:164] Run: docker container inspect multinode-304631-m02 --format={{.State.Status}}
	I1004 03:35:06.686716  135860 status.go:371] multinode-304631-m02 host status = "Stopped" (err=<nil>)
	I1004 03:35:06.686788  135860 status.go:384] host is not running, skipping remaining checks
	I1004 03:35:06.686798  135860 status.go:176] multinode-304631-m02 status: &{Name:multinode-304631-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.88s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (54.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-304631 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1004 03:35:36.507899    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-304631 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (53.765931135s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-304631 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.42s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (39.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-304631
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-304631-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-304631-m02 --driver=docker  --container-runtime=crio: exit status 14 (84.839171ms)

                                                
                                                
-- stdout --
	* [multinode-304631-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-304631-m02' is duplicated with machine name 'multinode-304631-m02' in profile 'multinode-304631'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-304631-m03 --driver=docker  --container-runtime=crio
E1004 03:36:26.675275    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-304631-m03 --driver=docker  --container-runtime=crio: (37.183301051s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-304631
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-304631: exit status 80 (311.008542ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-304631 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-304631-m03 already exists in multinode-304631-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-304631-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-304631-m03: (1.939254732s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.57s)

                                                
                                    
x
+
TestPreload (134.21s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-908809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-908809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m31.885893148s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-908809 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-908809 image pull gcr.io/k8s-minikube/busybox: (3.492970053s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-908809
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-908809: (5.735761354s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-908809 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-908809 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (30.560118284s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-908809 image list
helpers_test.go:175: Cleaning up "test-preload-908809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-908809
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-908809: (2.30760596s)
--- PASS: TestPreload (134.21s)

                                                
                                    
x
+
TestScheduledStopUnix (106.72s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-081291 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-081291 --memory=2048 --driver=docker  --container-runtime=crio: (30.488690168s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-081291 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-081291 -n scheduled-stop-081291
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-081291 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1004 03:39:29.797354    7560 retry.go:31] will retry after 74.013µs: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.798494    7560 retry.go:31] will retry after 162.793µs: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.798812    7560 retry.go:31] will retry after 181.76µs: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.799623    7560 retry.go:31] will retry after 227.901µs: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.800742    7560 retry.go:31] will retry after 316.801µs: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.801848    7560 retry.go:31] will retry after 977.035µs: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.802939    7560 retry.go:31] will retry after 1.653198ms: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.805095    7560 retry.go:31] will retry after 1.995404ms: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.807312    7560 retry.go:31] will retry after 2.598294ms: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.810495    7560 retry.go:31] will retry after 4.849942ms: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.815707    7560 retry.go:31] will retry after 6.108385ms: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.822981    7560 retry.go:31] will retry after 11.252462ms: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.835224    7560 retry.go:31] will retry after 10.101974ms: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.846492    7560 retry.go:31] will retry after 20.951879ms: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.868272    7560 retry.go:31] will retry after 15.011972ms: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
I1004 03:39:29.883534    7560 retry.go:31] will retry after 21.950483ms: open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/scheduled-stop-081291/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-081291 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-081291 -n scheduled-stop-081291
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-081291
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-081291 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1004 03:40:36.515222    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-081291
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-081291: exit status 7 (68.665213ms)

                                                
                                                
-- stdout --
	scheduled-stop-081291
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-081291 -n scheduled-stop-081291
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-081291 -n scheduled-stop-081291: exit status 7 (66.060935ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-081291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-081291
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-081291: (4.739393294s)
--- PASS: TestScheduledStopUnix (106.72s)
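The scheduled-stop commands exercised above correspond to the following interactive workflow (a sketch; the profile name is taken from this run and the plain `minikube` binary stands in for out/minikube-linux-arm64):

	# schedule a stop five minutes out, then replace it with a 15s schedule
	minikube stop -p scheduled-stop-081291 --schedule 5m
	minikube stop -p scheduled-stop-081291 --schedule 15s
	# a pending schedule can be cancelled before it fires
	minikube stop -p scheduled-stop-081291 --cancel-scheduled
	# once a schedule has fired, status reports Stopped and exits with code 7
	minikube status -p scheduled-stop-081291 --format={{.Host}}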

                                                
                                    
x
+
TestInsufficientStorage (10.18s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-563790 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-563790 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.733251726s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8b1dae7d-17a3-4c05-9168-f46fe075c896","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-563790] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0aa5cdd1-8efa-4f13-b169-1bfdbe8f1765","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19546"}}
	{"specversion":"1.0","id":"5fcaf59c-7a25-4c70-aa61-e910580f3cb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4396f0e7-a0fd-4ee5-bdae-fb2f94d4a311","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig"}}
	{"specversion":"1.0","id":"c3be033f-28f9-4f13-b85b-4888f1a494de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube"}}
	{"specversion":"1.0","id":"d021f342-ce06-4ba7-a9c0-395998d20479","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"19ccc2a8-b09e-4e55-bb70-5c86e576dec9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c78f058d-323c-4b82-8e35-e0f08e32f8ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"cb210614-c881-4f2e-90f0-e3d2b1be6608","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a227f9bf-b8e1-43a4-b5b6-9c29cbff855c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"087efdea-f9fb-4e12-91df-845c6c82a5c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1f19b911-cd75-4771-8967-db194afa0ea4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-563790\" primary control-plane node in \"insufficient-storage-563790\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2ff05e3-73fa-4df3-a1d9-d8a2270b012d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727731891-master ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cac69f5f-d005-4e77-90d2-27adb6ddc3e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"62201162-91bf-44e6-a17f-6018431a65ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-563790 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-563790 --output=json --layout=cluster: exit status 7 (287.773232ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-563790","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-563790","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 03:40:53.555997  153573 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-563790" does not appear in /home/jenkins/minikube-integration/19546-2238/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-563790 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-563790 --output=json --layout=cluster: exit status 7 (276.922759ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-563790","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-563790","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 03:40:53.832582  153636 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-563790" does not appear in /home/jenkins/minikube-integration/19546-2238/kubeconfig
	E1004 03:40:53.843291  153636 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/insufficient-storage-563790/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-563790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-563790
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-563790: (1.881597847s)
--- PASS: TestInsufficientStorage (10.18s)
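The storage check above is driven by the two MINIKUBE_TEST_* variables surfaced in the JSON events, and the result is inspected through the machine-readable status layout. Roughly (a sketch of the same flow; the variables are test-only knobs and the exit codes are taken from this run):

	# pretend /var has 100 units of capacity with only 19 available
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p insufficient-storage-563790 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
	# exits 26 (RSRC_DOCKER_STORAGE); per the advice in the event, --force skips the check
	# cluster-wide JSON status then reports StatusCode 507 / InsufficientStorage and exits 7
	minikube status -p insufficient-storage-563790 --output=json --layout=cluster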

                                                
                                    
x
+
TestRunningBinaryUpgrade (81.18s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.864290627 start -p running-upgrade-505617 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.864290627 start -p running-upgrade-505617 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.669804782s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-505617 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1004 03:45:36.507925    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-505617 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.625471639s)
helpers_test.go:175: Cleaning up "running-upgrade-505617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-505617
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-505617: (3.075722483s)
--- PASS: TestRunningBinaryUpgrade (81.18s)

                                                
                                    
x
+
TestKubernetesUpgrade (393.16s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-904287 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-904287 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m11.633653416s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-904287
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-904287: (1.360253099s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-904287 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-904287 status --format={{.Host}}: exit status 7 (93.284893ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-904287 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-904287 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.545306183s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-904287 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-904287 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-904287 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (96.234359ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-904287] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-904287
	    minikube start -p kubernetes-upgrade-904287 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9042872 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-904287 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-904287 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-904287 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.057980409s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-904287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-904287
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-904287: (2.254526935s)
--- PASS: TestKubernetesUpgrade (393.16s)
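Stripped of the test harness, the upgrade path above is three starts and a stop against the same profile (a sketch; versions and the exit code are taken from this run):

	# create the cluster on an old Kubernetes, stop it, then restart on the newer version
	minikube start -p kubernetes-upgrade-904287 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-904287
	minikube start -p kubernetes-upgrade-904287 --memory=2200 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=crio
	# an in-place downgrade is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED)
	minikube start -p kubernetes-upgrade-904287 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio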

                                                
                                    
x
+
TestMissingContainerUpgrade (172.17s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2622840502 start -p missing-upgrade-014414 --memory=2200 --driver=docker  --container-runtime=crio
E1004 03:41:26.675174    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2622840502 start -p missing-upgrade-014414 --memory=2200 --driver=docker  --container-runtime=crio: (1m32.700865003s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-014414
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-014414: (10.424533104s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-014414
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-014414 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-014414 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m5.210907853s)
helpers_test.go:175: Cleaning up "missing-upgrade-014414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-014414
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-014414: (3.134249322s)
--- PASS: TestMissingContainerUpgrade (172.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-324508 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-324508 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (91.193545ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-324508] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (36.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-324508 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-324508 --driver=docker  --container-runtime=crio: (36.191512472s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-324508 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (20.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-324508 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-324508 --no-kubernetes --driver=docker  --container-runtime=crio: (16.005968654s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-324508 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-324508 status -o json: exit status 2 (403.89593ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-324508","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-324508
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-324508: (3.779326099s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-324508 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-324508 --no-kubernetes --driver=docker  --container-runtime=crio: (6.013237002s)
--- PASS: TestNoKubernetes/serial/Start (6.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-324508 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-324508 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.096363ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-324508
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-324508: (1.282377423s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-324508 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-324508 --driver=docker  --container-runtime=crio: (7.30867707s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-324508 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-324508 "sudo systemctl is-active --quiet service kubelet": exit status 1 (326.529541ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (65.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3626153576 start -p stopped-upgrade-917470 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3626153576 start -p stopped-upgrade-917470 --memory=2200 --vm-driver=docker  --container-runtime=crio: (33.609055379s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3626153576 -p stopped-upgrade-917470 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3626153576 -p stopped-upgrade-917470 stop: (2.517109683s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-917470 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1004 03:44:29.747038    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-917470 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.273594734s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (65.40s)
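The stopped-binary upgrade is the same idea driven by an older release binary: start and stop the profile with the old binary, then start it again with the binary under test (paths here are the temp files used by this run):

	/tmp/minikube-v1.26.0.3626153576 start -p stopped-upgrade-917470 --memory=2200 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.26.0.3626153576 -p stopped-upgrade-917470 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-917470 --memory=2200 --driver=docker --container-runtime=crio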

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-917470
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-917470: (1.161337442s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                    
x
+
TestPause/serial/Start (82.68s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-261592 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1004 03:46:26.675480    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-261592 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.676471815s)
--- PASS: TestPause/serial/Start (82.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-808103 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-808103 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (175.993419ms)

                                                
                                                
-- stdout --
	* [false-808103] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 03:48:48.965624  193931 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:48:48.965844  193931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:48:48.965877  193931 out.go:358] Setting ErrFile to fd 2...
	I1004 03:48:48.965898  193931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:48:48.966164  193931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-2238/.minikube/bin
	I1004 03:48:48.966608  193931 out.go:352] Setting JSON to false
	I1004 03:48:48.967513  193931 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5474,"bootTime":1728008255,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1004 03:48:48.967639  193931 start.go:139] virtualization:  
	I1004 03:48:48.970972  193931 out.go:177] * [false-808103] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 03:48:48.974400  193931 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:48:48.974432  193931 notify.go:220] Checking for updates...
	I1004 03:48:48.977051  193931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:48:48.980080  193931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-2238/kubeconfig
	I1004 03:48:48.982785  193931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-2238/.minikube
	I1004 03:48:48.985313  193931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 03:48:48.987933  193931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:48:48.991165  193931 config.go:182] Loaded profile config "force-systemd-flag-922138": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:48:48.991283  193931 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:48:49.021503  193931 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 03:48:49.021657  193931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:48:49.078202  193931 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:51 SystemTime:2024-10-04 03:48:49.068623763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:48:49.078318  193931 docker.go:318] overlay module found
	I1004 03:48:49.081275  193931 out.go:177] * Using the docker driver based on user configuration
	I1004 03:48:49.083958  193931 start.go:297] selected driver: docker
	I1004 03:48:49.083975  193931 start.go:901] validating driver "docker" against <nil>
	I1004 03:48:49.083990  193931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:48:49.087054  193931 out.go:201] 
	W1004 03:48:49.089609  193931 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1004 03:48:49.092263  193931 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-808103 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-808103

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-808103

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-808103

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-808103

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-808103

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-808103

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-808103

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-808103

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-808103

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-808103

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-808103

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-808103" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-808103" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-808103

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808103"

                                                
                                                
----------------------- debugLogs end: false-808103 [took: 4.202608113s] --------------------------------
helpers_test.go:175: Cleaning up "false-808103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-808103
--- PASS: TestNetworkPlugins/group/false (4.56s)
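Editor's note: the exit status 14 above is the expected MK_USAGE guard, not a regression; with --container-runtime=crio, minikube rejects --cni=false because CRI-O provides no built-in pod networking. A minimal sketch of a start invocation that satisfies the guard, assuming a hypothetical profile name and that any non-false --cni value (bridge shown here) is acceptable to the check:

    # hypothetical profile; --cni=bridge is one of minikube's built-in CNI choices
    out/minikube-linux-arm64 start -p crio-with-cni --memory=2048 --driver=docker --container-runtime=crio --cni=bridge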

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (154.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-090696 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1004 03:50:36.507613    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:51:26.675505    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-090696 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m34.272216854s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (154.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-090696 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dcbae811-518b-4853-ae53-7312a582b4d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dcbae811-518b-4853-ae53-7312a582b4d3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005077556s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-090696 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.75s)
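Editor's note: the DeployApp step applies testdata/busybox.yaml, waits up to 8m0s for a pod labelled integration-test=busybox in the default namespace to become ready, then execs "ulimit -n" in it. A sketch of the same check done by hand, assuming the context name from the log; kubectl wait stands in for the harness's polling helper in helpers_test.go:

    kubectl --context old-k8s-version-090696 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-090696 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-090696 exec busybox -- /bin/sh -c "ulimit -n"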

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-090696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-090696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.397992641s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-090696 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.66s)
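Editor's note: this step enables metrics-server with the image and registry overridden to a fake domain and then describes the deployment to confirm the override landed. A sketch of a more targeted check, assuming the context from the log; the exact composition of the rewritten image reference (something like fake.domain/registry.k8s.io/echoserver:1.4) is an assumption here:

    # prints the image the addon deployment actually uses; it should carry the fake.domain registry override
    kubectl --context old-k8s-version-090696 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'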

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (65.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-673553 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-673553 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m5.200342997s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (13.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-090696 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-090696 --alsologtostderr -v=3: (13.852439934s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.85s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-090696 -n old-k8s-version-090696
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-090696 -n old-k8s-version-090696: exit status 7 (85.520018ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-090696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
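Editor's note: exit status 7 from status simply means the profile is stopped, which the test treats as acceptable. A sketch of the same sequence, on the assumption that enabling an addon on a stopped profile only records it in the profile config so it is deployed on the next start (which the later UserAppExistsAfterStop/AddonExistsAfterStop steps then verify):

    # exit 7 = host stopped; tolerated here
    out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-090696 || true
    # recorded in the profile config, applied on SecondStart
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-090696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4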

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (379.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-090696 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-090696 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (6m18.932532629s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-090696 -n old-k8s-version-090696
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (379.44s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-673553 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c409fe5b-9928-48be-9635-9ddaab4676e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c409fe5b-9928-48be-9635-9ddaab4676e7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005109137s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-673553 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-673553 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-673553 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-673553 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-673553 --alsologtostderr -v=3: (11.967252636s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-673553 -n no-preload-673553
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-673553 -n no-preload-673553: exit status 7 (73.714538ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-673553 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (265.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-673553 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1004 03:55:36.508229    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:56:26.675010    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-673553 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m24.983128732s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-673553 -n no-preload-673553
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (265.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-55bgp" [6e5dc0fc-d555-4494-b6d9-1e87922135c2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005831105s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-55bgp" [6e5dc0fc-d555-4494-b6d9-1e87922135c2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004770068s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-673553 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-673553 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-673553 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-673553 -n no-preload-673553
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-673553 -n no-preload-673553: exit status 2 (309.413985ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-673553 -n no-preload-673553
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-673553 -n no-preload-673553: exit status 2 (314.97783ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-673553 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-673553 -n no-preload-673553
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-673553 -n no-preload-673553
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.06s)
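Editor's note: while a profile is paused, status exits with code 2 and reports the apiserver as Paused and the kubelet as Stopped; the test accepts that ("may be ok") and then unpauses. A sketch of the same workflow as a small shell sequence, assuming the profile name from the log:

    out/minikube-linux-arm64 pause -p no-preload-673553
    # both status calls below are expected to exit 2 while paused
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-673553 || true   # "Paused"
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p no-preload-673553 || true     # "Stopped"
    out/minikube-linux-arm64 unpause -p no-preload-673553
    # after unpause, status is expected to succeed (exit 0) again
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-673553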

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (78.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-685775 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-685775 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m18.167863073s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wrtb9" [e6697c29-5105-4d04-9a08-e709e50c4969] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003742575s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wrtb9" [e6697c29-5105-4d04-9a08-e709e50c4969] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004783255s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-090696 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-090696 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-090696 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-090696 --alsologtostderr -v=1: (1.0160185s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-090696 -n old-k8s-version-090696
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-090696 -n old-k8s-version-090696: exit status 2 (339.778378ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-090696 -n old-k8s-version-090696
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-090696 -n old-k8s-version-090696: exit status 2 (306.009853ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-090696 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-090696 -n old-k8s-version-090696
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-090696 -n old-k8s-version-090696
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-626766 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-626766 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (53.444633011s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.44s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-685775 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9c9fb987-d03c-462c-8110-0ba74ac0ebe3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9c9fb987-d03c-462c-8110-0ba74ac0ebe3] Running
E1004 04:00:36.507396    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005198226s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-685775 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-685775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-685775 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-685775 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-685775 --alsologtostderr -v=3: (11.962890905s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-626766 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8fd37b58-a779-4472-9470-5b36ffdadae5] Pending
helpers_test.go:344: "busybox" [8fd37b58-a779-4472-9470-5b36ffdadae5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8fd37b58-a779-4472-9470-5b36ffdadae5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004229043s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-626766 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-685775 -n embed-certs-685775
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-685775 -n embed-certs-685775: exit status 7 (69.949156ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-685775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (275.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-685775 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-685775 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m34.928690805s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-685775 -n embed-certs-685775
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (275.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-626766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-626766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.149279628s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-626766 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-626766 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-626766 --alsologtostderr -v=3: (12.129484259s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-626766 -n default-k8s-diff-port-626766
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-626766 -n default-k8s-diff-port-626766: exit status 7 (104.081287ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-626766 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)
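The enable-addon-after-stop check follows the same pattern: confirm the node is stopped, then enable an addon against the stopped profile. A hedged sketch built from the commands in the log above; exit status 7 with "Stopped" is the expected result of the status call while the profile is down:

    # Expect exit status 7 and "Stopped" while the profile is stopped.
    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-626766 -n default-k8s-diff-port-626766

    # Enabling the dashboard addon should still succeed on a stopped profile.
    out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-626766 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4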

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-626766 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1004 04:01:09.748552    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:01:26.675757    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:47.595548    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:47.602061    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:47.613452    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:47.634831    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:47.676259    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:47.757751    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:47.919340    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:48.241249    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:48.883211    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:50.165489    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:52.726987    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:57.848984    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:03:08.090966    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:03:28.572699    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:03.795100    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:03.801499    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:03.812842    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:03.834223    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:03.875595    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:03.957643    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:04.119682    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:04.441448    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:05.083230    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:06.364722    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:08.927073    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:09.535002    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:14.049050    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:24.290764    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:04:44.772717    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:05:19.576907    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:05:25.734398    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-626766 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m27.824408172s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-626766 -n default-k8s-diff-port-626766
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2vhls" [a3274fdf-9f47-4109-87eb-a9103adc0015] Running
E1004 04:05:31.456879    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005249069s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2vhls" [a3274fdf-9f47-4109-87eb-a9103adc0015] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0040742s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-685775 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5b45x" [68a77f47-45d6-43b5-b282-68f3f34beca5] Running
E1004 04:05:36.508109    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003762389s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-685775 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-685775 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-685775 -n embed-certs-685775
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-685775 -n embed-certs-685775: exit status 2 (314.295644ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-685775 -n embed-certs-685775
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-685775 -n embed-certs-685775: exit status 2 (315.682422ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-685775 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-685775 -n embed-certs-685775
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-685775 -n embed-certs-685775
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.97s)
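The pause checks above can be replayed by hand. The sketch below mirrors the sequence in the log (pause, verify the API server reports Paused and the kubelet reports Stopped, then unpause), using the embed-certs profile name as shown; exit status 2 from the status commands is expected while paused:

    out/minikube-linux-arm64 pause -p embed-certs-685775 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-685775 -n embed-certs-685775   # "Paused", exit status 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-685775 -n embed-certs-685775     # "Stopped", exit status 2
    out/minikube-linux-arm64 unpause -p embed-certs-685775 --alsologtostderr -v=1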

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5b45x" [68a77f47-45d6-43b5-b282-68f3f34beca5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004023239s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-626766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (43.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-541304 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-541304 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (43.467375178s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.47s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-626766 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-626766 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-626766 -n default-k8s-diff-port-626766
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-626766 -n default-k8s-diff-port-626766: exit status 2 (324.743286ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-626766 -n default-k8s-diff-port-626766
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-626766 -n default-k8s-diff-port-626766: exit status 2 (318.669857ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-626766 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-626766 --alsologtostderr -v=1: (1.43139637s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-626766 -n default-k8s-diff-port-626766
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-626766 -n default-k8s-diff-port-626766
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (87.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1004 04:06:26.675172    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/addons-561541/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m27.263453313s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.26s)
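The remaining network-plugin groups start their profiles with the same flags, differing only in how the CNI is chosen: auto passes no --cni flag, enable-default-cni passes --enable-default-cni=true, and the others pass an explicit --cni value. A generic sketch; <profile> and <cni> are placeholders, not names from this log:

    out/minikube-linux-arm64 start -p <profile> --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=<cni> --driver=docker --container-runtime=crio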

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-541304 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-541304 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.894282467s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.89s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-541304 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-541304 --alsologtostderr -v=3: (1.308256397s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-541304 -n newest-cni-541304
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-541304 -n newest-cni-541304: exit status 7 (81.837446ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-541304 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (17.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-541304 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1004 04:06:47.656493    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-541304 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (16.52764476s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-541304 -n newest-cni-541304
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-541304 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-541304 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-541304 -n newest-cni-541304
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-541304 -n newest-cni-541304: exit status 2 (303.098644ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-541304 -n newest-cni-541304
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-541304 -n newest-cni-541304: exit status 2 (296.21799ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-541304 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-541304 -n newest-cni-541304
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-541304 -n newest-cni-541304
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.01s)
E1004 04:12:23.277419    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:23.283876    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:23.295293    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:23.316657    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:23.358113    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:23.439618    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:23.601129    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:23.922975    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:24.565143    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:25.846540    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:28.408468    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:33.530132    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:43.772060    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/auto-808103/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (76.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m16.145974472s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-808103 "pgrep -a kubelet"
I1004 04:07:22.993849    7560 config.go:182] Loaded profile config "auto-808103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-808103 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tdxc9" [f3ad058f-8228-4749-8b87-ea5c882debb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tdxc9" [f3ad058f-8228-4749-8b87-ea5c882debb8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.007195765s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-808103 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
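Each network-plugin group runs the same three connectivity probes against the netcat deployment: cluster DNS, localhost, and hairpin (the pod reaching its own service). The auto-profile commands from the log, collected here as a sketch:

    kubectl --context auto-808103 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"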

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (64.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m4.875672101s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6gk68" [36681a98-defc-4275-9bb3-ea5801259f8a] Running
E1004 04:08:15.298226    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004971658s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
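The ControllerPod step waits for the CNI's controller pod to become healthy using the test's own pod-list helper. A rough local equivalent, assuming kubectl wait rather than the harness helper (label and namespace taken from the log):

    kubectl --context kindnet-808103 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m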

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-808103 "pgrep -a kubelet"
I1004 04:08:16.255030    7560 config.go:182] Loaded profile config "kindnet-808103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-808103 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7jphd" [32811f29-0957-46df-961c-814fee34a4b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7jphd" [32811f29-0957-46df-961c-814fee34a4b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00463456s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-808103 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (63.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.860768614s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5hx2f" [512054a8-b851-46b2-9f14-9f2984dc8882] Running
E1004 04:09:03.795013    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/no-preload-673553/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00545511s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-808103 "pgrep -a kubelet"
I1004 04:09:04.919712    7560 config.go:182] Loaded profile config "calico-808103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-808103 replace --force -f testdata/netcat-deployment.yaml
I1004 04:09:05.234439    7560 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8h7wn" [fed8ec8b-1419-46d0-a736-3899eac917a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8h7wn" [fed8ec8b-1419-46d0-a736-3899eac917a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004388173s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-808103 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (74.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m14.031465309s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-808103 "pgrep -a kubelet"
I1004 04:09:56.977904    7560 config.go:182] Loaded profile config "custom-flannel-808103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-808103 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n22pl" [4cc086c2-2e74-46c3-92ea-d43222344f1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n22pl" [4cc086c2-2e74-46c3-92ea-d43222344f1c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004896198s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-808103 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (58.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1004 04:10:36.509016    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/functional-154453/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:10:44.316391    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:10:44.322870    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:10:44.334219    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:10:44.355567    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:10:44.396892    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:10:44.478401    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:10:44.639927    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:10:44.961767    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:10:45.604022    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:10:46.886274    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:10:49.447610    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:10:54.568884    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (58.634908516s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-808103 "pgrep -a kubelet"
I1004 04:10:56.354821    7560 config.go:182] Loaded profile config "enable-default-cni-808103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-808103 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kqbfb" [ff72c924-8d14-4eaa-8fe6-dc715c8b522d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kqbfb" [ff72c924-8d14-4eaa-8fe6-dc715c8b522d] Running
E1004 04:11:04.810505    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/default-k8s-diff-port-626766/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004409305s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.41s)
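The NetCatPod step deploys testdata/netcat-deployment.yaml and then waits for the app=netcat pod to become Ready. A hand-rolled approximation of that wait (a sketch; the 90s timeout is illustrative only, the test itself allows up to 15m):

    kubectl --context enable-default-cni-808103 get pods -n default -l app=netcat
    kubectl --context enable-default-cni-808103 wait -n default --for=condition=Ready pod -l app=netcat --timeout=90s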

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-808103 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)
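The DNS check is a single in-cluster lookup from the netcat pod; it passes as long as kubernetes.default resolves through the cluster DNS service. Reproduced by hand it is just the command shown above (sketch):

    kubectl --context enable-default-cni-808103 exec deployment/netcat -- nslookup kubernetes.default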

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
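Localhost and HairPin run the same netcat probe against different targets: first localhost:8080 from inside the pod, then the host name "netcat" (presumably a Service created by the test manifest), which sends traffic back to the same pod through its Service and so exercises hairpin NAT in the CNI. The two probes, exactly as run above (sketch):

    # reach the pod's own port via localhost
    kubectl --context enable-default-cni-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # reach the same pod back through its Service (hairpin)
    kubectl --context enable-default-cni-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"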

                                                
                                    
TestNetworkPlugins/group/bridge/Start (76.61s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-808103 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m16.614140069s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.61s)
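To see what CNI configuration a --cni=bridge start actually leaves on the node, one rough check is to list the CNI config directory over the same ssh path the suite uses (a sketch; /etc/cni/net.d is the conventional location, not something this log asserts):

    out/minikube-linux-arm64 ssh -p bridge-808103 "sudo ls /etc/cni/net.d"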

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-g9cpc" [2c5a8376-1c1d-4800-90b2-59af1131a3fd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004873835s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
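ControllerPod only waits for the kube-flannel DaemonSet pod to report Running; a one-off equivalent of that check (sketch, with the label and namespace taken from the wait above):

    kubectl --context flannel-808103 -n kube-flannel get pods -l app=flannel -o wide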

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-808103 "pgrep -a kubelet"
I1004 04:11:37.276285    7560 config.go:182] Loaded profile config "flannel-808103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-808103 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vkjnt" [ad2f7d23-6725-46da-8a47-b34d24db69aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vkjnt" [ad2f7d23-6725-46da-8a47-b34d24db69aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.004256685s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-808103 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-808103 "pgrep -a kubelet"
I1004 04:12:45.690836    7560 config.go:182] Loaded profile config "bridge-808103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-808103 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pwtxm" [8fa1e6cf-b09a-404e-b0ae-e5d777d0e2b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1004 04:12:47.595591    7560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/old-k8s-version-090696/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-pwtxm" [8fa1e6cf-b09a-404e-b0ae-e5d777d0e2b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004241226s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-808103 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-808103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    

Test skip (29/323)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.51s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-973464 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-973464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-973464
--- SKIP: TestDownloadOnlyKic (0.51s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:796: skipping: crio not supported
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-561541 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:423: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-933254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-933254
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.98s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-808103 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-808103

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-808103

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-808103

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-808103

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-808103

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-808103

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-808103

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-808103

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-808103

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-808103

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-808103

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-808103" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-808103" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19546-2238/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 04 Oct 2024 03:48:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-flag-922138
contexts:
- context:
    cluster: force-systemd-flag-922138
    extensions:
    - extension:
        last-update: Fri, 04 Oct 2024 03:48:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-flag-922138
  name: force-systemd-flag-922138
current-context: force-systemd-flag-922138
kind: Config
preferences: {}
users:
- name: force-systemd-flag-922138
  user:
    client-certificate: /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/force-systemd-flag-922138/client.crt
    client-key: /home/jenkins/minikube-integration/19546-2238/.minikube/profiles/force-systemd-flag-922138/client.key
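The dump above is the host's shared kubeconfig rather than anything belonging to kubenet-808103: that profile was never started, so the file still points at the most recently updated profile (here force-systemd-flag-922138, last-update 03:48:45). A quick way to confirm what kubectl would currently target (sketch):

    kubectl config current-context
    kubectl config get-contexts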

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-808103

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808103"

                                                
                                                
----------------------- debugLogs end: kubenet-808103 [took: 4.830031049s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-808103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-808103
--- SKIP: TestNetworkPlugins/group/kubenet (4.98s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.49s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-808103 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-808103

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-808103" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-808103" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-808103" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: kubelet daemon config:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> k8s: kubelet logs:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-808103

>>> host: docker daemon status:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: docker daemon config:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: docker system info:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: cri-docker daemon status:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: cri-docker daemon config:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: cri-dockerd version:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: containerd daemon status:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: containerd daemon config:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: containerd config dump:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: crio daemon status:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: crio daemon config:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: /etc/crio:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

>>> host: crio config:
* Profile "cilium-808103" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808103"

----------------------- debugLogs end: cilium-808103 [took: 5.303362496s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-808103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-808103
--- SKIP: TestNetworkPlugins/group/cilium (5.49s)